When (if ever) is it a good idea to do a post hoc power analysis?
In my field I see people doing post-hoc power analyses when the purpose of the paper is to show that some effect one might have expected to be present (because of previous literature, common sense, etc.) is not, at least according to some significance test. In these situations, however, the researcher is in a bind: he or she may have obtained a non-significant result either because the effect really is absent from the population or because the study was not sufficiently powered to detect the effect even if it were present. The purpose of the power analysis, then, is to show that, given even a trivially small effect in the population, the study would have had a high probability of detecting it. For a concrete example of this use of post-hoc power analysis, see this linked paper.
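This kind of sensitivity-style calculation can be done in base R with power.t.test. A minimal sketch, assuming a two-sample t-test design with hypothetical numbers (n = 200 per group, and d = 0.2 standing in for a "trivially small" standardized effect):

```r
# What power did a study with n = 200 per group have to detect
# a small standardized effect (Cohen's d = 0.2)?
# All numbers here are hypothetical, for illustration only.
power.t.test(n = 200, delta = 0.2, sd = 1, sig.level = 0.05,
             type = "two.sample", alternative = "two.sided")
```

The larger the reported power for the chosen small effect, the harder it is to attribute a non-significant result to an underpowered design.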
When (if ever) is it a good idea to do a post hoc power analysis?
You can always compute the probability that a study would have produced a significant result for a given a priori effect size. In theory, this should be done before a study is conducted, because there is no point in carrying out a low-powered study that has little chance of producing a significant result even when an effect is present. However, you can also compute power after the study, to learn that the study had low power or, less commonly, that it had high power to detect even a small effect. The terms post-hoc power or observed power are used for power analyses that use the observed effect size in a sample to compute power, under the assumption that the observed effect size is a reasonable estimate of the true effect size. Many statisticians have pointed out that observed power in a single study is not very informative, because a single study does not estimate the effect size with sufficient precision. More recently, researchers have started to examine observed power for a set of studies, to see how powerful studies are on average and whether studies report more significant results than their actual power would justify. https://replicationindex.wordpress.com/tag/observed-power/
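To make the "observed power" idea concrete, here is a minimal sketch in base R with hypothetical numbers: the observed sample effect size (say d = 0.35, with n = 50 per group) is plugged back in as if it were the true effect size.

```r
# "Observed power": reuse the sample effect size as if it were the
# true population effect size. d_obs and n are hypothetical.
d_obs <- 0.35
power.t.test(n = 50, delta = d_obs, sd = 1, sig.level = 0.05,
             type = "two.sample")$power
```

This illustrates the circularity the statisticians above object to: the power estimate inherits all the sampling error of the observed effect size.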
Expected number I will be on after drawing cards until I get an ace, 2, 3, and so forth
Following @gung's idea, I believe the expected value would be 5.84? And from my interpretation of the comments, I'm assuming "A" is an almost impossible value (unless the last four cards in the deck are all aces). Here are the results of a 100,000-iteration Monte Carlo simulation:

results
    2     3     4     5     6     7     8     9     J     K     Q     T
 1406  7740 16309 21241 19998 15127  9393  4906   976   190   380  2334

And here's the R code in case you'd like to play with it:

# monte carlo card-drawing functions from here
# http://streaming.stat.iastate.edu/workshops/r-intro/lectures/5-Rprogramming.pdf

# create a straightforward deck of cards
create_deck <- function(){
    suit <- c( "H" , "C" , "D" , "S" )
    rank <- c( "A" , 2:9 , "T" , "J" , "Q" , "K" )
    deck <- NULL
    for ( r in rank ) deck <- c( deck , paste( r , suit ) )
    deck
}

# construct a function to shuffle everything
shuffle <- function( deck ){ sample( deck , length( deck ) ) }

# draw one card at a time
draw_cards <- function( deck , start , n = 1 ){
    cards <- NULL
    for ( i in start:( start + n - 1 ) ){
        if ( i <= length( deck ) ){ cards <- c( cards , deck[ i ] ) }
    }
    return( cards )
}

# create an empty vector for your results
results <- NULL

# run your simulation this many times..
for ( i in seq( 100000 ) ){

    # create a new deck
    sdeck <- shuffle( create_deck() )

    # loop through ranks in this order
    rank <- c( "A" , 2:9 , "T" , "J" , "Q" , "K" )

    # start at this position
    card.position <- 0

    # start with a blank current.draw
    current.draw <- ""

    # start with a blank current rank
    this.rank <- NULL

    # start with the first rank
    rank.position <- 1

    # keep drawing until you find the rank you wanted
    while( card.position < 52 ){

        # increase the position by one every time
        card.position <- card.position + 1

        # store the current draw for testing next time
        current.draw <- draw_cards( sdeck , card.position )

        # if you draw the current rank, move to the next.
        if ( grepl( rank[ rank.position ] , current.draw ) ) rank.position <- rank.position + 1

        # if you have gone through every rank and are still not out of cards,
        # should it still be a king?  this assumes yes.
        if ( rank.position == length( rank ) ) break

    }

    # store the rank for this iteration.
    this.rank <- rank[ rank.position ]

    # at the end of the iteration, store the result
    results <- c( results , this.rank )

}

# print the final results
table( results )

# make A, T, J, Q, K numerics
results[ results == 'A' ] <- 1
results[ results == 'T' ] <- 10
results[ results == 'J' ] <- 11
results[ results == 'Q' ] <- 12
results[ results == 'K' ] <- 13
results <- as.numeric( results )

# and here's your expected value after 100,000 simulations.
mean( results )
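As a quick cross-check, the expected value can be recomputed directly from the tabulated counts above (the columns appear in table()'s alphabetical order, so T, J, Q, K must be mapped back to their numeric values 10 through 13):

```r
# Recompute the expected value implied by the tabulated simulation counts.
# Counts are copied from the table above, in its (alphabetical) column order.
counts <- c(`2`=1406, `3`=7740, `4`=16309, `5`=21241, `6`=19998, `7`=15127,
            `8`=9393, `9`=4906, J=976, K=190, Q=380, T=2334)
values <- c(2:9, 11, 13, 12, 10)  # numeric value of each listed rank
sum(values * counts) / sum(counts)
# 5.83754 -- consistent with the ~5.84 quoted above
```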
Expected number I will be on after drawing cards until I get an ace, 2, 3, and so forth
For a simulation it's crucial to be correct as well as fast. Both these objectives suggest writing code that targets core capabilities of the programming environment as well as code that is as short and simple as possible, because simplicity lends clarity and clarity promotes correctness. Here is my attempt to achieve both in R:

#
# Simulate one play with a deck of `n` distinct cards in `k` suits.
#
sim <- function(n=13, k=4) {
  deck <- sample(rep(1:n, k))          # Shuffle the deck
  deck <- c(deck, 1:n)                 # Add sentinels to terminate the loop
  k <- 0                               # Count the cards searched for
  for (j in 1:n) {
    k <- k+1                           # Count this card
    deck <- deck[-(1:match(j, deck))]  # Deal cards until `j` is found
    if (length(deck) < n) break        # Stop when sentinels are reached
  }
  return(k)                            # Return the number of cards searched
}

Applying this in a reproducible way can be done with the replicate function after setting the random number seed, as in

> set.seed(17); system.time(d <- replicate(10^5, sim(13, 4)))
   user  system elapsed
   5.46    0.00    5.46

That's slow, but fast enough to conduct fairly lengthy (and therefore precise) simulations repeatedly without waiting. There are several ways we can exhibit the result. Let's start with its mean:

> n <- length(d)
> mean(d)
[1] 5.83488
> sd(d) / sqrt(n)
[1] 0.005978956

The latter is the standard error: we expect the simulated mean to be within two or three SEs of the true value. That places the true expectation somewhere between $5.817$ and $5.853$.

We might also want to see a tabulation of the frequencies (and their standard errors). The following code prettifies the tabulation a little:

u <- table(d)
u.se <- sqrt(u/n * (1-u/n)) / sqrt(n)
cards <- c("A", "2", "3", "4", "5", "6", "7", "8", "9", "T", "J", "Q", "K")
dimnames(u) <- list(sapply(dimnames(u), function(x) cards[as.integer(x)]))
print(rbind(frequency=u/n, SE=u.se), digits=2)

Here is the output:

                2       3      4      5      6      7       8       9       T       J       Q       K
frequency 0.01453 0.07795 0.1637 0.2104 0.1995 0.1509 0.09534 0.04995 0.02249 0.01009 0.00345 0.00173
SE        0.00038 0.00085 0.0012 0.0013 0.0013 0.0011 0.00093 0.00069 0.00047 0.00032 0.00019 0.00013

How can we know the simulation is even correct? One way is to test it exhaustively for smaller problems. For that reason this code was written to attack a small generalization of the problem, replacing $13$ distinct cards with n and $4$ suits with k. However, for the testing it is important to be able to feed the code a deck in a predetermined order. Let's write a slightly different interface to the same algorithm:

draw <- function(deck) {
  n <- length(sentinels <- sort(unique(deck)))
  deck <- c(deck, sentinels)
  k <- 0
  for (j in sentinels) {
    k <- k+1
    deck <- deck[-(1:match(j, deck))]
    if (length(deck) < n) break
  }
  return(k)
}

(It is possible to use draw in place of sim everywhere, but the extra work done at the beginning of draw makes it twice as slow as sim.)

We can use this by applying it to every distinct shuffle of a given deck. Since the purpose here is just a few one-off tests, efficiency in generating those shuffles is unimportant. Here is a quick brute-force way:

n <- 4      # Distinct cards
k <- 2      # Number of suits
d <- expand.grid(lapply(1:(n*k), function(i) 1:n))
e <- apply(d, 1, function(x) var(tabulate(x))==0)
g <- apply(d, 1, function(x) length(unique(x))==n)
d <- d[e & g,]

Now d is a data frame whose rows contain all the shuffles. Apply draw to each row and count the results:

d$result <- apply(as.matrix(d), 1, draw)
(counts <- table(d$result))

The output (which we will use in a formal test momentarily) is

   2    3    4
 420  784 1316

(The value of $420$ is easy to understand, by the way: we would still be working on card $2$ if and only if all the twos preceded all the aces. The chance of this happening (with two suits) is $1/\binom{2+2}{2} = 1/6$. Out of the $2520$ distinct shuffles, $2520/6 = 420$ have this property.)

We can test the output with a chi-squared test. To this end I apply sim $10,000$ times to this case of $n = 4$ distinct cards in $k = 2$ suits:

> set.seed(17)
> d.sim <- replicate(10^4, sim(n, k))
> print((rbind(table(d.sim) / length(d.sim), counts / dim(d)[1])), digits=3)

         2     3     4
[1,] 0.168 0.312 0.520
[2,] 0.167 0.311 0.522

> chisq.test(table(d.sim), p=counts / dim(d)[1])

        Chi-squared test for given probabilities

data:  table(d.sim)
X-squared = 0.2129, df = 2, p-value = 0.899

Because $p$ is so high, we find no significant difference between what sim says and the values computed by exhaustive enumeration. Repeating this exercise for some other (small) values of $n$ and $k$ produces comparable results, giving us ample reason to trust sim when applied to $n=13$ and $k=4$.

Finally, a two-sample chi-squared test will compare the output of sim to the output reported in another answer:

> y <- c(1660,8414,16973,21495,20021,14549,8957,4546,2087,828,313,109)
> chisq.test(cbind(u, y))

data:  cbind(u, y)
X-squared = 142.2489, df = 11, p-value < 2.2e-16

The enormous chi-squared statistic produces a p-value that is essentially zero: without a doubt, sim disagrees with the other answer. There are two possible resolutions of the disagreement: one (or both!) of these answers is incorrect, or they implement different interpretations of the question. For instance, I have interpreted "after the deck runs out" to mean after observing the last card and, if allowable, updating the "number you will be on" before terminating the procedure. Conceivably that last step was not meant to be taken. Perhaps some such subtle difference of interpretation will explain the disagreement, at which point we can modify the question to make it clearer what is being asked.
Expected number I will be on after drawing cards until I get an ace, 2, 3, and so forth
There is an exact answer (in the form of a matrix product, presented in point 4 below). A reasonably efficient algorithm to compute it exists, deriving from these observations:

1. A random shuffle of $N+k$ cards can be generated by randomly shuffling $N$ cards and then randomly interspersing the remaining $k$ cards within them.

2. By shuffling only the aces, and then (applying the first observation) interspersing the twos, then the threes, and so on, this problem can be viewed as a chain of thirteen steps.

3. We do need to keep track of more than the value of the card we are seeking. When doing this, however, we don't need to account for the position of the mark relative to all the cards, but only its position relative to cards of equal or smaller value. Imagine placing a mark on the first ace, and then marking the first two found after it, and so on. (If at any stage the deck runs out without displaying the card we are currently seeking, we will leave all cards unmarked.) Let the "place" of each mark (when it exists) be the number of cards of equal or lower value that were dealt when the mark was made (including the marked card itself). The places contain all the essential information. The place after the $i^\text{th}$ mark is made is a random number. For a given deck, the sequence of these places forms a stochastic process. It in fact is a Markov process (with variable transition matrix).

4. An exact answer can therefore be calculated from twelve matrix multiplications.

Using these ideas, this machine obtains a value of $5.8325885529019965$ (computing in double precision floating point) in $1/9$ second. This approximation of the exact value $$\frac{1982600579265894785026945331968939023522542569}{339917784579447928182134345929899510000000000}$$ is accurate to all digits shown.

The rest of this post provides details, presents a working implementation (in R), and concludes with some comments about the question and the efficiency of the solution.
Generating random shuffles of a deck

It is actually clearer conceptually and no more complicated mathematically to consider a "deck" (aka multiset) of $N = k_1+k_2+\cdots+k_m$ cards of which there are $k_1$ of the lowest denomination, $k_2$ of the next lowest, and so on. (The question as asked concerns the deck determined by the $13$-vector $(4,4,\ldots,4)$.)

A "random shuffle" of $N$ cards is one permutation taken uniformly and randomly from the $N! = N\times(N-1)\times\cdots\times 2\times 1$ permutations of the $N$ cards. These shuffles fall into groups of equivalent configurations because permuting the $k_1$ "aces" among themselves changes nothing, permuting the $k_2$ "twos" among themselves also changes nothing, and so on. Therefore each group of permutations that look identical when the suits of the cards are ignored contains $k_1!\times k_2!\times \cdots \times k_m!$ permutations. These groups, whose number therefore is given by the multinomial coefficient $$\binom{N}{k_1,k_2,\ldots,k_m} = \frac{N!}{k_1!k_2!\cdots k_m!},$$ are called "combinations" of the deck.

There is another way to count the combinations. The first $k_1$ cards can form only $k_1!/k_1! = 1$ combination. They leave $k_1+1$ "slots" between and around them in which the next $k_2$ cards can be placed. We could indicate this with a diagram where "$*$" designates one of the $k_1$ cards and "$\_$" designates a slot that can hold between $0$ and $k_2$ additional cards: $$\underbrace{\_*\_*\_\cdots\_*\_}_{k_1\text{ stars}}$$ When $k_2$ additional cards are interspersed, the pattern of stars and new cards partitions the $k_1+k_2$ cards into two subsets. The number of distinct such subsets is $\binom{k_1+k_2}{k_1,k_2} = \frac{(k_1+k_2)!}{k_1!k_2!}$. Repeating this procedure with $k_3$ "threes," we find there are $\binom{(k_1+k_2)+k_3}{k_1+k_2,k_3}= \frac{(k_1+k_2+k_3)!}{(k_1+k_2)!k_3!}$ ways to intersperse them among the first $k_1+k_2$ cards.
Therefore the total number of distinct ways to arrange the first $k_1+k_2+k_3$ cards in this manner equals $$1\times\frac{(k_1+k_2)!}{k_1!k_2!}\times\frac{(k_1+k_2+k_3)!}{(k_1+k_2)!k_3!} = \frac{(k_1+k_2+k_3)!}{k_1!k_2!k_3!}.$$ After finishing the last $k_m$ cards and continuing to multiply these telescoping fractions, we find that the number of distinct combinations obtained equals the total number of combinations as previously counted, $\binom{N}{k_1,k_2,\ldots,k_m}$. Therefore we have overlooked no combinations. That means this sequential process of shuffling the cards correctly captures the probabilities of each combination, assuming that at each stage each possible distinct way of interspersing the new cards among the old is taken with uniformly equal probability.

The place process

Initially, there are $k_1$ aces and obviously the very first is marked. At later stages there are $n = k_1 + k_2 + \cdots + k_{j-1}$ cards, the place (if a marked card exists) equals $p$ (some value from $1$ through $n$), and we are about to intersperse $k=k_j$ cards around them. We can visualize this with a diagram like $$\underbrace{\_*\_*\_\cdots\_*\_}_{p-1\text{ stars}}\odot\underbrace{\_*\_\cdots\_*\_}_{n-p\text{ stars}}$$ where "$\odot$" designates the currently marked symbol. Conditional on this value of the place $p$, we wish to find the probability that the next place will equal $q$ (some value from $1$ through $n+k$; by the rules of the game, the next place must come after $p$, whence $q\ge p+1$). If we can find how many ways there are to intersperse the $k$ new cards in the blanks so that the next place equals $q$, then we can divide by the total number of ways to intersperse these cards (equal to $\binom{n+k}{k}$, as we have seen) to obtain the transition probability that the place changes from $p$ to $q$.
(There will also be a transition probability for the place to disappear altogether when none of the new cards follow the marked card, but there is no need to compute this explicitly.)

Let's update the diagram to reflect this situation: $$\underbrace{\_*\_*\_\cdots\_*\_}_{p-1\text{ stars}}\odot\underbrace{**\cdots*}_{s\text{ stars}}\ \vert\ \underbrace{\_*\_\cdots\_*\_}_{n-p-s\text{ stars}}$$ The vertical bar "$\vert$" shows where the first new card occurs after the marked card: no new cards may therefore appear between the $\odot$ and the $\vert$ (and therefore no slots are shown in that interval). We do not know how many stars there are in this interval, so I have just called it $s$ (which may be zero). The unknown $s$ will disappear once we find the relationship between it and $q$.

Suppose, then, we intersperse $j$ new cards around the stars before the $\odot$ and then--independently of that--we intersperse the remaining $k-j-1$ new cards around the stars after the $\vert$. There are $$\tau_{n,k}(s,p) = \binom{(p-1)+j}{j}\binom{(n-p-s) + (k-j)-1}{k-j-1}$$ ways to do this. Notice, though--this is the trickiest part of the analysis--that the place of $\vert$ equals $p+s+j+1$ because

There are $p$ "old" cards at or before the mark. There are $s$ old cards after the mark but before $\vert$. There are $j$ new cards before the mark. There is the new card represented by $\vert$ itself.

Thus, $\tau_{n,k}(s,p)$ gives us information about the transition from place $p$ to place $q=p+s+j+1$. When we track this information carefully for all possible values of $s$, and sum over all these (disjoint) possibilities, we obtain the conditional probability of place $q$ following place $p$, $${\Pr}_{n,k}(q|p) = \left(\sum_j \binom{p-1+j}{j}\binom{n+k-q}{k-j-1}\right) / \binom{n+k}{k}$$ where the sum starts at $j=\max(0, q-(n+1))$ and ends at $j=\min(k-1, q-(p+1))$.
(The variable length of this sum suggests there is unlikely to be a closed formula for it as a function of $n, k, q,$ and $p$, except in special cases.) The algorithm Initially there is probability $1$ that the place will be $1$ and probability $0$ it will have any other possible value in $2, 3, \ldots, k_1$. This can be represented by a vector $p_1 = (1, 0, \ldots, 0)$. After interspersing the next $k_2$ cards, the vector $p_1$ is updated to $p_2$ by multiplying it (on the left) by the transition matrix $(\Pr_{k_1,k_2}(q|p), 1\le p\le k_1, 1\le q\le k_2)$. This is repeated until all $k_1+k_2+\cdots+k_m$ cards have been placed. At each stage $j$, the sum of the entries in the probability vector $p_j$ is the chance that some card has been marked. Whatever remains to make the value equal to $1$ therefore is the chance that no card is left marked after step $j$. The successive differences in these values therefore give us the probability that we could not find a card of type $j$ to mark: that is the probability distribution of the value of the card we were looking for when the deck runs out at the end of the game. Implementation The following R code implements the algorithm. It parallels the preceding discussion. First, calculation of the transition probabilities is performed by t.matrix (without normalization with the division by $\binom{n+k}{k}$, making it easier to track the calculations when testing the code): t.matrix <- function(q, p, n, k) { j <- max(0, q-(n+1)):min(k-1, q-(p+1)) return (sum(choose(p-1+j,j) * choose(n+k-q, k-1-j)) } This is used by transition to update $p_{j-1}$ to $p_j$. It calculates the transition matrix and performs the multiplication. It also takes care of computing the initial vector $p_1$ if the argument p is an empty vector: # # `p` is the place distribution: p[i] is the chance the place is `i`. # transition <- function(p, k) { n <- length(p) if (n==0) { q <- c(1, rep(0, k-1)) } else { # # Construct the transition matrix. 
    #
    t.mat <- matrix(0, nrow=n, ncol=(n+k))
    #dimnames(t.mat) <- list(p=1:n, q=1:(n+k))
    for (i in 1:n) {
      t.mat[i, ] <- c(rep(0, i),
                      sapply((i+1):(n+k), function(q) t.matrix(q, i, n, k)))
    }
    #
    # Normalize and apply the transition matrix.
    #
    q <- as.vector(p %*% t.mat / choose(n+k, k))
  }
  names(q) <- 1:(n+k)
  return (q)
}

We can now easily compute the non-mark probabilities at each stage for any deck:

#
# `k` is an array giving the numbers of each card in order;
# e.g., k = rep(4, 13) for a standard deck.
#
# NB: the *complements* of the p-vectors are output.
#
game <- function(k) {
  p <- numeric(0)
  q <- sapply(k, function(i) 1 - sum(p <<- transition(p, i)))
  names(q) <- names(k)
  return (q)
}

Here they are for the standard deck:

k <- rep(4, 13)
names(k) <- c("A", 2:9, "T", "J", "Q", "K")
(g <- game(k))

The output is

         A          2          3          4          5          6          7          8          9          T          J          Q          K
0.00000000 0.01428571 0.09232323 0.25595013 0.46786622 0.66819134 0.81821790 0.91160622 0.96146102 0.98479430 0.99452614 0.99818922 0.99944610

According to the rules, if a king was marked then we would not look for any further cards: this means the value of $0.9994461$ has to be increased to $1$. Upon doing so, the differences give the distribution of the "number you will be on when the deck runs out":

> g[13] <- 1; diff(g)
          2           3           4           5           6           7           8           9           T           J           Q           K
0.014285714 0.078037518 0.163626897 0.211916093 0.200325120 0.150026562 0.093388313 0.049854807 0.023333275 0.009731843 0.003663077 0.001810781

(Compare this to the output I report in a separate answer describing a Monte-Carlo simulation: they appear to be the same, up to expected amounts of random variation.)
Remarks Relationships to other sequences When there is one of each card, the distribution is a sequence of reciprocals of whole numbers: > 1/diff(game(rep(1,10))) [1] 2 3 8 30 144 840 5760 45360 403200 The value at place $i$ is $i! + (i-1)!$ (starting at place $i=1$). This is sequence A001048 in the Online Encyclopedia of Integer Sequences. Accordingly, we might hope for a closed formula for the decks with constant $k_i$ (the "suited" decks) that would generalize this sequence, which itself has some profound meanings. (For instance, it counts sizes of the largest conjugacy classes in permutation groups and is also related to trinomial coefficients.) (Unfortunately, the reciprocals in the generalization for $k\gt 1$ are not usually integers.) The game as a stochastic process Our analysis makes it clear that the initial $i$ coefficients of the vectors $p_j$, $j\ge i$, are constant. For example, let's track the output of game as it processes each group of cards: > sapply(1:13, function(i) game(rep(4,i))) [[1]] [1] 0 [[2]] [1] 0.00000000 0.01428571 [[3]] [1] 0.00000000 0.01428571 0.09232323 [[4]] [1] 0.00000000 0.01428571 0.09232323 0.25595013 ... [[13]] [1] 0.00000000 0.01428571 0.09232323 0.25595013 0.46786622 0.66819134 0.81821790 0.91160622 0.96146102 0.98479430 0.99452614 0.99818922 0.99944610 For instance, the second value of the final vector (describing the results with a full deck of 52 cards) already appeared after the second group was processed (and equals $1/\binom{8}{4}=1/70$). Thus, if you want information only about the marks up through the $j^\text{th}$ card value, you only have to perform the calculation for a deck of $k_1+k_2+\cdots+k_j$ cards. Because the chance of not marking a card of value $j$ is getting quickly close to $1$ as $j$ increases, after $13$ types of cards in four suits we have almost reached a limiting value for the expectation. 
Indeed, the limiting value is approximately $5.833355$ (computed for a deck of $4 \times 32$ cards, at which point double precision rounding error prevents going any further). Timing Looking at the algorithm applied to the $m$-vector $(k,k, \ldots, k)$, we see its timing should be proportional to $k^2$ and--using a crude upper bound--not any worse than proportional to $m^3$. By timing all calculations for $k=1$ through $7$ and $n=10$ through $30$, and analyzing only those taking relatively long times ($1/2$ second or longer), I estimate the computation time is approximately $O(k^2 n^{2.9})$, supporting this upper-bound assessment. One use of these asymptotics is to project calculation times for larger problems. For instance, seeing that the case $k=4, n=30$ takes about $1.31$ seconds, we would estimate that the (very interesting) case $k=1, n=100$ would take about $1.31(1/4)^2(100/30)^{2.9}\approx 2.7$ seconds. (It actually takes $2.87$ seconds.)
24,106
Expected number I will be on after drawing cards until I get an ace, 2, 3, and so forth
Hacked a simple Monte Carlo in Perl and found approximately $5.8329$.

#!/usr/bin/perl
use strict;

my @deck = (1..13) x 4;
my $N = 100000;    # Monte Carlo iterations.
my $mean = 0;

for (my $i = 1; $i <= $N; $i++) {
    my @d = @deck;
    fisher_yates_shuffle(\@d);
    my $last = 0;
    foreach my $c (@d) {
        if ($c == $last + 1) { $last = $c }
    }
    $mean += ($last + 1) / $N;
}
print $mean, "\n";

sub fisher_yates_shuffle {
    my $array = shift;
    my $i = @$array;
    while (--$i) {
        my $j = int rand($i + 1);
        @$array[$i, $j] = @$array[$j, $i];
    }
}
24,107
Criteria for selecting the "best" model in a Hidden Markov Model
I'm assuming here that your output variable is categorical, though that may not be the case. Typically, though, when I've seen HMMs used, the number of states is known in advance rather than selected through tuning. Usually they correspond to some well-understood variable that happens to not be observed. But that doesn't mean you can't experiment with it.

The danger in using BIC (and AIC) is that the k value for the number of free parameters in the model increases quadratically with the number of states, because you have the transition probability matrix with P*(P-1) free parameters (for P states) plus the output probabilities for each category of the output given each state. So if the AIC and BIC are being calculated properly, k should be going up fast.

If you have enough data, I would recommend a softer method of tuning the number of states, like testing on a holdout sample. You might also want to just look at the likelihood statistic and visually see at what point it plateaus. Also, if your data set is large, keep in mind that this will push the BIC toward a smaller model.
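To see how fast the penalty grows, the parameters can be counted explicitly. The Python sketch below is my own illustration (not from the answer); it also counts the P-1 free initial-state probabilities, a common convention the answer doesn't mention:

```python
import math

def hmm_free_params(P, C):
    # Discrete-output HMM with P hidden states and C output categories:
    # (P-1) initial probabilities + P*(P-1) transition probabilities
    # + P*(C-1) emission probabilities (each probability row sums to 1).
    return (P - 1) + P * (P - 1) + P * (C - 1)

def bic(loglik, P, C, n_obs):
    # BIC = -2*logL + k*log(n); the penalty term is quadratic in P.
    return -2.0 * loglik + hmm_free_params(P, C) * math.log(n_obs)

# Doubling the state count far more than doubles the parameter count:
for P in (2, 4, 8):
    print(P, hmm_free_params(P, C=5))
```

For a fixed log-likelihood, `bic` therefore rises steeply with the number of states, which is the quadratic penalty growth described above.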
24,108
Penalty value in changepoint analysis
If you want to test "significance" then I suggest you use the Asymptotic penalty option, i.e. penalty='Asymptotic' and pen.value=0.05 for 95% confidence. This automatically sets the penalty based on the cost function you are using. I find that this works well for smaller data sets (<1000 points) but not very small ones (<100).

If you want to use the manual penalty option, then the simplest rule is that a lower penalty value results in more changepoints being identified. It is up to the user to decide what value is appropriate. Personally, I use an "elbow" plot to decide this. The elbow plot is constructed by varying the penalty value and plotting the number of changepoints identified against the penalty used. This will show a rapid decrease (getting rid of the changepoints induced by noise) which then slows until the count goes to 0 changes. You want to choose a penalty that is after the rapid decrease but not too far after, as you will start losing "true" changepoints.

Apologies for the condition of the graph, but I had to convert it from a PDF to a jpeg to be able to upload it here.
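The elbow construction can be illustrated without the changepoint package. The sketch below is my own (in Python rather than R): it runs an exact penalized least-squares segmentation by optimal partitioning — the same objective PELT optimizes, just without the pruning — and shows the changepoint count falling as the penalty rises:

```python
import random

def n_changepoints(y, beta):
    # Exact penalized segmentation by optimal partitioning:
    # F[t] = min_{s<t} F[s] + SSE(y[s:t]) + beta.
    n = len(y)
    S, S2 = [0.0], [0.0]
    for v in y:                      # prefix sums for O(1) segment cost
        S.append(S[-1] + v)
        S2.append(S2[-1] + v * v)
    def sse(a, b):                   # within-segment sum of squares, y[a:b]
        return S2[b] - S2[a] - (S[b] - S[a]) ** 2 / (b - a)
    F = [0.0] + [float("inf")] * n
    last = [0] * (n + 1)
    for t in range(1, n + 1):
        for s in range(t):
            c = F[s] + sse(s, t) + beta
            if c < F[t]:
                F[t], last[t] = c, s
    cps, t = 0, n                    # walk back, counting interior changes
    while last[t] > 0:
        cps += 1
        t = last[t]
    return cps

random.seed(1)
y = ([random.gauss(0, 1) for _ in range(50)] +
     [random.gauss(5, 1) for _ in range(50)])   # one true change at t=50
counts = [n_changepoints(y, beta) for beta in (1.0, 50.0, 2000.0)]
print(counts)  # count drops as the penalty grows
```

Plotting such counts against a fine grid of penalties gives exactly the elbow plot described above: many spurious changes at tiny penalties, a plateau around the true count, then zero once the penalty swamps any real change.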
24,109
Penalty value in changepoint analysis
This tutorial, made by the original author of the package (along with some time spent playing with the arguments of cpt.mean and looking through the docs), helped me understand how the function behaves: http://members.cbio.mines-paristech.fr/~thocking/change-tutorial/RK-CptWorkshop.html

More tutorials here: https://github.com/tdhock/change-tutorial

I hope this helps.
24,110
Fixed vs Random Effects
Fixed effects models and random effects models ask different questions of the data. Specifying a set of group-level dummy variables essentially controls for all group-level unobserved heterogeneity in the average response, leaving your estimates to reflect only variability within units. Random effects models start with the assumption that there is a meta-population of (whatever effect), and that your sample reflects many draws from that population. So rather than anchoring your results around heterogeneous intercepts, your data will be used to elucidate the parameters of that (usually normal) distribution from which your data were supposedly drawn. It is often said that fixed effects models are good for conducting inference on the data that you have, and that random effects models are good for trying to conduct inference on some larger population from which your data is a random sample.

When I learned about fixed effects models, they were motivated using error components and panel data. Take multiple observations of a given unit, and a random treatment in time $t$:

$$y_{it} = \alpha_i + \beta T_{it} + \epsilon_{it}$$

You can break your error term out into that component of your error term that varies in time, and one that doesn't:

$$y_{it} = \alpha_i + \beta T_{it} + e_i + u_{it}$$

Now subtract the groupwise mean from both sides:

$$y_{it} - \bar y_i = \alpha_i - \bar \alpha_i + \beta \left(T_{it}- \bar T_i\right) + e_i - \bar e_i + u_{it} - \bar u_i$$

Things that aren't subscripted by $t$ come out of the equation by basic subtraction -- which is to say that the average over time is the same as the value at any time if it never changes. This includes the non-time-varying component of your error term. Thus your estimates are unconfounded by time-invariant heterogeneity. This doesn't quite work for a random effects model -- your non-$t$-indexed variables won't be sopped up by that transformation (the "within" transformation).
As such, you can draw inference on the effects of things that don't vary within group. In the real world, such things have importance. Thus, random effects are good for "modeling the data", while fixed effects models are good for getting closer to unbiased estimates of particular terms. With a random effects model, you can't make the claim to have removed that $e_i$ entirely. In this example, time is the grouping variable. In your example, it is DID. (i.e.: it generalizes)
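The bias-removal property of the within transformation is easy to see in simulation. The following Python sketch is my own illustration (not from the answer): it generates panel data in which the unit effect $e_i$ is correlated with the regressor, so pooled OLS is biased upward while demeaned (fixed-effects) OLS recovers the true $\beta$:

```python
import random
import statistics

random.seed(0)
N, T, beta = 200, 5, 2.0              # units, periods, true effect
x, y, unit = [], [], []
for i in range(N):
    e_i = random.gauss(0, 2)          # time-invariant unit effect
    for t in range(T):
        x_it = e_i + random.gauss(0, 1)   # regressor correlated with e_i
        y_it = beta * x_it + e_i + random.gauss(0, 1)
        x.append(x_it); y.append(y_it); unit.append(i)

def ols_slope(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return sxy / sum((a - mx) ** 2 for a in xs)

pooled = ols_slope(x, y)              # confounded by e_i

# Within transformation: subtract each unit's time average.
xw, yw = [], []
for i in range(N):
    xs = [a for a, u in zip(x, unit) if u == i]
    ys = [b for b, u in zip(y, unit) if u == i]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    xw += [a - mx for a in xs]
    yw += [b - my for b in ys]
within = ols_slope(xw, yw)            # e_i differenced out

print(round(pooled, 2), round(within, 2))
```

With these (arbitrary) variances, the pooled slope lands well above 2 while the within slope is close to 2, which is the "unconfounded by time-invariant heterogeneity" claim in action.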
Fixed vs Random Effects
Fixed effects models and random effects models ask different questions of the data. Specifying a set of group-level dummy variables essentially controls for all group-level unobserved heterogeneity i
Fixed vs Random Effects Fixed effects models and random effects models ask different questions of the data. Specifying a set of group-level dummy variables essentially controls for all group-level unobserved heterogeneity in the average response, leaving your estimates to reflect only variability within units. Random effects models start with the assumption that there is a meta-population of (whatever effect), and that your sample reflects many draws from that population. So rather than anchoring your results around heterogeneous intercepts, your data will be used to elucidate the parameters of that (usually normal) distribution from which your data were supposedly drawn. It is often said that fixed effects models are good for conducting inference on the data that you have, and that random effects models are good for trying to conduct inference on some larger population from which your data is a random sample. When I learned about fixed effects models, they were motivated using error components and panel data. Take multiple observations of a given unit, and a random treatment in time $t$. $$y_{it} = \alpha_i + \beta T_{it} + \epsilon_{it}$$ You can break your error term out into that component of your error term that varies in time, and one that doesn't: $$y_{it} = \alpha_i + \beta T_{it} + e_i + u_{it}$$ Now subtract the groupwise mean from both sides: $$y_{it} - \bar y_i = \alpha_i - \bar \alpha_i + \beta \left(T_{it}- \bar T_i\right) + e_i - \bar e_i+ u_{it}- \bar u_it$$ Things that aren't subscripted by $t$ come out of the equation by basic subtraction -- which is to say that the average over time is the same as it is at any time if it never changes. This includes your non-time-varying component of your error term. Thus your estimates are unconfounded by time-invariant heterogeneity. This doesn't quite work for a random effects model -- your non-$t$-indexed variables won't be sopped up by that transformation (the "within" transformation). 
As such, you can draw inference on the effects of things that don't vary within group. In the real world, such things have importance. Thus, random effects are good for "modeling the data", while fixed effects models are good for getting closer to unbiased estimates of particular terms. With a random effects model, you can't make the claim to have removed that $e_i$ entirely. In this example, time is the grouping variable. In your example, it is DID. (i.e.: it generalizes)
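The mechanics of the within transformation are easy to verify numerically. Below is a minimal R sketch (simulated data, with hypothetical variable names) showing that pooled OLS is confounded by a unit-level error component correlated with treatment, while demeaning within units recovers the true coefficient:

```r
# Simulate a panel: 200 units observed for 5 periods each, with a
# time-invariant error component e_i that is correlated with treatment.
set.seed(1)
n_units <- 200; n_periods <- 5
id  <- rep(1:n_units, each = n_periods)
e_i <- rep(rnorm(n_units), each = n_periods)        # unit-level heterogeneity
trt <- rbinom(n_units * n_periods, 1, plogis(e_i))  # treatment likelier when e_i is high
y   <- 1 + 0.5 * trt + e_i + rnorm(n_units * n_periods)

# Pooled OLS: biased upward, because trt picks up part of e_i
coef(lm(y ~ trt))["trt"]

# Within ("fixed effects") transformation: subtract groupwise means;
# e_i drops out, so the estimate is unconfounded by it
y_dm   <- y   - ave(y,   id)
trt_dm <- trt - ave(trt, id)
coef(lm(y_dm ~ trt_dm))["trt_dm"]
```

The pooled estimate should land well above the true 0.5, while the demeaned estimate should be close to it -- this is exactly the subtraction step in the equations above.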
24,111
Fixed vs Random Effects
1) It is appropriate to make the comparison, just not with those two models. You would want to compare: GLM <- glm(remission~Age+Married+IL6, data=hdp, family=binomial) with GLMM <- glmer(remission~Age+Married+IL6+(1|DID), data=hdp, family=binomial) and you can do this with an anova: anova(GLM, GLMM) (Not sure if this will work with the glm and glmer results, as they might be different R objects. You might have to use two functions that have comparable return objects, like lme and gls, or do the anova yourself.) The anova will do a log-likelihood ratio test to see if the addition of the random doctor effect is significant. You would need to divide that p-value by 2 before declaring significance because you are testing the null hypothesis that the random doctor effect is 0, and 0 is on the boundary of the parameter space for a variance (the actual distribution you are using in the test is a mixture of the $\chi^2_0$ and $\chi^2_1$ distribution -- but I'm near the boundary of my own ignorance at this point). For me, the best book for understanding the process of nested model building and hypothesis testing has been West, Welch, and Galecki (2007) Linear Mixed Models: A practical guide. They go through everything step by step. 2) If you have multiple observations per patient you would also add a random effect for patient. Then to test the relative importance of patient vs. doctor you could look at the predictive effects of patient vs. the predictive effects for doctor. The random effects terms for each will quantify the amount of variance between patients and between doctors, if that is a question you are interested in. (Someone please correct me if I'm wrong!)
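If anova() balks at mixing glm and glmer objects, the likelihood-ratio test is easy to do by hand. A sketch on simulated data (the hdp data from the question are not reproduced here, so the variable names below are illustrative), with the p-value halved as described above:

```r
# Manual LRT for a random intercept in a logistic model; requires lme4.
library(lme4)

set.seed(42)
g <- factor(rep(1:50, each = 20))   # 50 "doctors", 20 patients each
u <- rnorm(50)[g]                   # true doctor effects, sd = 1
x <- rnorm(1000)
y <- rbinom(1000, 1, plogis(-0.5 + x + u))

fit_glm  <- glm(y ~ x, family = binomial)
fit_glmm <- glmer(y ~ x + (1 | g), family = binomial)

# Twice the log-likelihood difference, referred to a 50:50 mixture of
# chi^2_0 and chi^2_1 -- i.e. the usual chi^2_1 p-value divided by 2
lrt <- as.numeric(2 * (logLik(fit_glmm) - logLik(fit_glm)))
p   <- 0.5 * pchisq(lrt, df = 1, lower.tail = FALSE)
```

With a true random-effect standard deviation of 1, the test should reject decisively here.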
24,112
Fixed vs Random Effects
The models are very different. The glm model is addressing the overall reduction in deviance (from a null model) when all of the doctorID effects are being estimated and assigned parameter estimates. You notice, of course, that Age, Married, and IL6 all have the same Wald statistics in the two models, right? My understanding (not a highly refined one, I will admit) is that the mixed model is treating the doctorIDs as nuisance factors or strata, namely "effects" that cannot be assumed to be drawn from any particular parent distribution. I see no reason to think that using a mixed model would improve your understanding of the "doctor effect"; quite the opposite, in fact. If your interest were in the effects of Age, Married, or IL6, I would have imagined that you would not be comparing AIC across those two models but rather comparing differences in AIC with removal of covariates of interest within the same modeling structure.
24,113
What is the expected distribution of residuals in a generalized linear model?
What is the expected distribution of residuals? It varies with the model in ways that make this impossible to answer generally. For example, should the residuals be distributed normally? Not generally, no.
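As a concrete illustration (an assumption: R with simulated data), deviance residuals from a perfectly specified logistic regression are strongly non-normal simply because the response takes only two values:

```r
set.seed(1)
x   <- rnorm(500)
y   <- rbinom(500, 1, plogis(x))    # data generated from the model itself
fit <- glm(y ~ x, family = binomial)
r   <- residuals(fit, type = "deviance")

# The residuals form two clumps (one per outcome value) and fail a
# normality test even though the model is exactly right.
shapiro.test(r)
hist(r, breaks = 40)
```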
24,114
What is the expected distribution of residuals in a generalized linear model?
There is a whole cottage industry centered around designing residuals for GLMs that are more symmetric or even approximately "normal" (i.e. Gaussian), e.g. Pearson residuals, Anscombe residuals, (adjusted) deviance residuals, etc. See for example Chapter 6 of James W. Hardin and Joseph M. Hilbe (2007) "Generalized Linear Models and Extensions" second edition. College Station, TX: Stata Press. If the dependent variable is discrete (an indicator variable or a count) then it is obviously very hard to make the expected distribution of the residuals exactly Gaussian. One thing you can do is repeatedly simulate new data under the assumption that your model is true, estimate your model using that simulated data and compute the residuals, and then compare your actual residuals with your simulated residuals. In Stata I would do this like so: sysuse nlsw88, clear glm wage i.union grade c.ttl_exp##c.ttl_exp, link(log) family(poisson) // collect which observations were used in estimation and the predicted mean gen byte touse = e(sample) predict double mu if touse // predict residuals predict resid if touse, anscombe // prepare variables for plotting a cumulative distribution function cumul resid, gen(c) // collect the graph command in the local macro `graph' local graph "twoway" // create 19 simulations: gen ysim = . forvalues i = 1/19 { replace ysim = rpoisson(mu) if touse glm ysim i.union grade c.ttl_exp##c.ttl_exp, link(log) family(poisson) predict resid`i' if touse, anscombe cumul resid`i', gen(c`i') local graph "`graph' line c`i' resid`i', sort lpattern(solid) lcolor(gs8) ||" } local graph "`graph' line c resid, sort lpattern(solid) lcolor(black) " // display the graph `graph' legend(order(20 "actual residuals" 1 "simulations"))
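The same simulate-and-compare idea can be sketched in R for readers who don't use Stata (an illustration on simulated Poisson data, not a line-by-line translation of the Stata code above):

```r
# Fit a Poisson GLM, then overlay the ECDF of the actual Pearson
# residuals with ECDFs from 19 datasets simulated under the fitted model.
set.seed(7)
x   <- rnorm(300)
y   <- rpois(300, exp(1 + 0.3 * x))
fit <- glm(y ~ x, family = poisson)
mu  <- fitted(fit)

plot(ecdf(residuals(fit, type = "pearson")),
     main = "Actual (black) vs simulated (grey) residuals")
for (i in 1:19) {
  ysim <- rpois(length(mu), mu)     # new data under the fitted model
  rsim <- residuals(glm(ysim ~ x, family = poisson), type = "pearson")
  lines(ecdf(rsim), col = "grey")
}
```

If the black curve sits comfortably inside the band of grey curves, the residual distribution is consistent with the model; 19 simulations give an informal 1-in-20 reference, as in the Stata example.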
24,115
Compute BIC clustering criterion (to validate clusters after K-means)
To calculate the BIC for the kmeans results, I have tested the following methods: The following formula is from [ref2]. The R code for the above formula is: k3 <- kmeans(mt,3) intra.mean <- mean(k3$within) k10 <- kmeans(mt,10) centers <- k10$centers BIC <- function(mt,cls,intra.mean,centers){ x.centers <- apply(centers,2,function(y){ as.numeric(y)[cls] }) sum1 <- sum(((mt-x.centers)/intra.mean)**2) sum1 + NCOL(mt)*length(unique(cls))*log(NROW(mt)) } # the problem is that when I use the above R code, the calculated BIC is monotonically increasing. What's the reason? [ref2] Ramsey, S. A., et al. (2008). "Uncovering a macrophage transcriptional program by integrating evidence from motif scanning and expression dynamics." PLoS Comput Biol 4(3): e1000021. I have used the new formula from https://stackoverflow.com/questions/15839774/how-to-calculate-bic-for-k-means-clustering-in-r BIC2 <- function(fit){ m = ncol(fit$centers) n = length(fit$cluster) k = nrow(fit$centers) D = fit$tot.withinss return(data.frame(AIC = D + 2*m*k, BIC = D + log(n)*m*k)) } This method gives the lowest BIC value at cluster number 155. Using the method @ttnphns provided, the corresponding R code is listed below. However, the problem is: what is the difference between Vc and V? And how do I calculate the element-wise multiplication for two vectors with different lengths? BIC3 <- function(fit,mt){ Nc <- as.matrix(as.numeric(table(fit$cluster)),nc=1) Vc <- apply(mt,2,function(x){ tapply(x,fit$cluster,var) }) V <- matrix(rep(apply(mt,2,function(x){ var(x) }),length(Nc)),byrow=TRUE,nrow=length(Nc)) LL = -Nc * colSums( log(Vc + V)/2 ) ## how to calculate this? element-wise multiplication for two vectors with different lengths? BIC = -2 * rowSums(LL) + 2*K*P * log(NRoW(mt)) return(BIC) }
24,116
Compute BIC clustering criterion (to validate clusters after K-means)
I don't use R, but here is a scheme which I hope will help you to compute the value of the BIC or AIC clustering criterion for any given clustering solution. This approach follows the SPSS Algorithms for TwoStep cluster analysis (see the formulas there, starting from the chapter "Number of clusters", then moving to "Log-likelihood distance", where ksi, the log-likelihood, is defined). BIC (or AIC) is computed based on the log-likelihood distance. I'm showing below the computation for quantitative data only (the formula given in the SPSS document is more general and incorporates also categorical data; I'm discussing only its quantitative-data "part"):

X is the data matrix, N objects x P quantitative variables. Y is a column of length N designating cluster membership; clusters 1, 2, ..., K.

1. Compute the 1 x K row Nc showing the number of objects in each cluster.
2. Compute the P x K matrix Vc containing variances by clusters. Use denominator "n", not "n-1", to compute those, because there may be clusters with just one object.
3. Compute the P x 1 column containing variances for the whole sample. Use the "n-1" denominator. Then propagate the column to get the P x K matrix V.
4. Compute the log-likelihood LL, a 1 x K row. LL = -Nc &* csum( ln(Vc + V)/2 ), where "&*" means usual, elementwise multiplication; "csum" means sum of elements within columns.
5. Compute the BIC value. BIC = -2 * rsum(LL) + 2*K*P * ln(N), where "rsum" means sum of elements within a row.
6. You could also compute the AIC value. AIC = -2 * rsum(LL) + 4*K*P

Note: By default the SPSS TwoStep cluster procedure standardizes all quantitative variables, therefore V consists of just 1s; it is the constant 1. V serves simply as insurance against the ln(0) case. AIC and BIC clustering criteria are used not only with K-means clustering. They may be useful for any clustering method which treats within-cluster density as within-cluster variance. Because AIC and BIC are meant to penalize for "excessive parameters", they unambiguously tend to prefer solutions with fewer clusters. "Fewer clusters, more dissociated from one another" could be their motto. There can be various versions of BIC/AIC clustering criteria. The one I showed here uses Vc, the within-cluster variances, as the principal term of the log-likelihood. Some other version, perhaps better suited for k-means clustering, might base the log-likelihood on the within-cluster sums of squares. The pdf version of the same SPSS document which I referred to. And here, finally, are the formulae themselves, corresponding to the above pseudocode and the document; they are taken from the description of the function (macro) I've written for SPSS users. If you have any suggestions to improve the formulae please post a comment or an answer.
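Since the question asked for R, here is a sketch translating the pseudocode above into a function for a kmeans fit (the function name and the use of the built-in iris data are my own illustrative choices):

```r
bic_kmeans <- function(fit, X) {
  X   <- as.matrix(X)
  cls <- fit$cluster
  N <- nrow(X); P <- ncol(X); K <- length(unique(cls))

  Nc <- as.numeric(table(cls))                 # 1 x K cluster sizes
  # P x K within-cluster variances, denominator n (not n - 1)
  Vc <- sapply(split(as.data.frame(X), cls),
               function(d) apply(d, 2, function(v) mean((v - mean(v))^2)))
  Vc <- matrix(Vc, nrow = P)                   # guard the K = 1 case
  # whole-sample variances (denominator n - 1), propagated to P x K
  V  <- matrix(apply(X, 2, var), nrow = P, ncol = K)

  LL <- -Nc * colSums(log(Vc + V) / 2)         # 1 x K log-likelihood
  c(BIC = -2 * sum(LL) + 2 * K * P * log(N),
    AIC = -2 * sum(LL) + 4 * K * P)
}

set.seed(1)
fit <- kmeans(iris[, 1:4], centers = 3, nstart = 20)
bic_kmeans(fit, iris[, 1:4])
```

Note that Nc * colSums(...) is the "element-wise multiplication" asked about: once Vc and V are built as P x K matrices, both factors are length-K vectors (one entry per cluster), so there is no length mismatch.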
24,117
Dealing with regression of unusually bounded response variable
Although I'm not entirely certain what your problem with linear regression is, I'm right now finishing an article about how to analyze bounded outcomes. Since I'm not familiar with beta regression, perhaps someone else will answer that option. From your question I understand that you get predictions outside the boundaries. In this case I would go for logistic quantile regression. Quantile regression is a very neat alternative to regular linear regression. You can look at different quantiles and get a much better picture of your data than what's possible with regular linear regression. It also has no distributional assumptions [1]. Transforming a variable can have funny effects in linear regression; for instance, an effect that is significant after the logistic transformation need not be significant on the original scale. This is not the case with quantiles: the median is always the median regardless of the transformation function, which allows you to transform back and forth without distorting anything. Prof. Bottai suggested this approach to bounded outcomes [2]; it's an excellent method if you want to do individual predictions, but it has some issues when you want to look at the betas and interpret them in a non-logistic way. The formula is simple: $\mathrm{logit}(y) = \log\left(\frac{y - \min(y) + \epsilon}{\max(y) - y + \epsilon}\right)$ where $y$ is your score and $\epsilon$ is an arbitrary small number. 
Here's an example that I did a while ago when I wanted to experiment with it in R: library(rms) library(lattice) library(cairoDevice) library(ggplot2) # Simulate some data set.seed(10) intercept <- 0 beta1 <- 0.5 beta2 <- 1 n = 1000 xtest <- rnorm(n,1,1) gender <- factor(rbinom(n, 1, .4), labels=c("Male", "Female")) random_noise <- runif(n, -1,1) # Add a ceiling and a floor to simulate a bound score fake_ceiling <- 4 fake_floor <- -1 # Simulate the predictor linpred <- intercept + beta1*xtest^3 + beta2*(gender == "Female") + random_noise # Remove some extremes extreme_roof <- fake_ceiling + abs(diff(range(linpred)))/2 extreme_floor <- fake_floor - abs(diff(range(linpred)))/2 linpred[ linpred > extreme_roof| linpred < extreme_floor ] <- NA #limit the interval and give a ceiling and a floor effect similar to scores linpred[linpred > fake_ceiling] <- fake_ceiling linpred[linpred < fake_floor] <- fake_floor # Just to give the graphs the same look my_ylim <- c(fake_floor - abs(fake_floor)*.25, fake_ceiling + abs(fake_ceiling)*.25) my_xlim <- c(-1.5, 3.5) # Plot df <- data.frame(Outcome = linpred, xtest, gender) ggplot(df, aes(xtest, Outcome, colour = gender)) + geom_point() This gives the following data scatter, as you can see it is clearly bounded and inconvenient: ################################### # Calculate & plot the true lines # ################################### x <- seq(min(xtest), max(xtest), by=.1) y <- beta1*x^3+intercept y_female <- y + beta2 y[y > fake_ceiling] <- fake_ceiling y[y < fake_floor] <- fake_floor y_female[y_female > fake_ceiling] <- fake_ceiling y_female[y_female < fake_floor] <- fake_floor tr_df <- data.frame(x=x, y=y, y_female=y_female) true_line_plot <- xyplot(y + y_female ~ x, data=tr_df, type="l", xlim=my_xlim, ylim=my_ylim, ylab="Outcome", auto.key = list( text = c("Male"," Female"), columns=2)) ########################## # Test regression models # ########################## # Regular linear regression fit_lm <- Glm(linpred~rcs(xtest, 
5)+gender, x=T, y=T) boot_fit_lm <- bootcov(fit_lm, B=500) p <- Predict(boot_fit_lm, xtest=seq(-2.5, 3.5, by=.001), gender=c("Male", "Female")) lm_plot <- plot(p, se=T, col.fill=c("#9999FF", "#BBBBFF"), xlim=my_xlim, ylim=my_ylim) This results in the following picture where females are clearly above the upper boundary: # Quantile regression - regular fit_rq <- Rq(formula(fit_lm), x=T, y=T) boot_rq <- bootcov(fit_rq, B=500) # A little disturbing warning: # In rq.fit.br(x, y, tau = tau, ...) : Solution may be nonunique p <- Predict(boot_rq, xtest=seq(-2.5, 3.5, by=.001), gender=c("Male", "Female")) rq_plot <- plot(p, se=T, col.fill=c("#9999FF", "#BBBBFF"), xlim=my_xlim, ylim=my_ylim) This gives the following plot with similar problems: # The logit transformations logit_fn <- function(y, y_min, y_max, epsilon) log((y-(y_min-epsilon))/(y_max+epsilon-y)) antilogit_fn <- function(antiy, y_min, y_max, epsilon) (exp(antiy)*(y_max+epsilon)+y_min-epsilon)/ (1+exp(antiy)) epsilon <- .0001 y_min <- min(linpred, na.rm=T) y_max <- max(linpred, na.rm=T) logit_linpred <- logit_fn(linpred, y_min=y_min, y_max=y_max, epsilon=epsilon) fit_rq_logit <- update(fit_rq, logit_linpred ~ .) boot_rq_logit <- bootcov(fit_rq_logit, B=500) p <- Predict(boot_rq_logit, xtest=seq(-2.5, 3.5, by=.001), gender=c("Male", "Female")) # Change back to the original scale, otherwise the plot will be on the logit scale transformed_p <- p transformed_p$yhat <- antilogit_fn(p$yhat, y_min=y_min, y_max=y_max, epsilon=epsilon) transformed_p$lower <- antilogit_fn(p$lower, y_min=y_min, y_max=y_max, epsilon=epsilon) transformed_p$upper <- antilogit_fn(p$upper, y_min=y_min, y_max=y_max, epsilon=epsilon) logit_rq_plot <- plot(transformed_p, se=T, col.fill=c("#9999FF", "#BBBBFF"), xlim=my_xlim) The logistic quantile regression has a very nice bounded prediction: Here you can see the issue with the betas: retransformed, they differ across regions (as expected): # Some issues trying to display the gender factor contrast(boot_rq_logit, list(gender=levels(gender), xtest=c(-1:1)), FUN=function(x)antilogit_fn(x, epsilon)) gender xtest Contrast S.E. Lower Upper Z Pr(>|z|) Male -1 -2.5001505 0.33677523 -3.1602179 -1.84008320 -7.42 0.0000 Female -1 -1.3020162 0.29623080 -1.8826179 -0.72141450 -4.40 0.0000 Male 0 -1.3384751 0.09748767 -1.5295474 -1.14740279 -13.73 0.0000 * Female 0 -0.1403408 0.09887240 -0.3341271 0.05344555 -1.42 0.1558 Male 1 -1.3308691 0.10810012 -1.5427414 -1.11899674 -12.31 0.0000 * Female 1 -0.1327348 0.07605115 -0.2817923 0.01632277 -1.75 0.0809 Redundant contrasts are denoted by * Confidence intervals are 0.95 individual intervals References R. Koenker and G. Bassett Jr, “Regression quantiles,” Econometrica: journal of the Econometric Society, pp. 33–50, 1978. M. Bottai, B. Cai, and R. E. McKeown, “Logistic quantile regression for bounded outcomes,” Statistics in Medicine, vol. 29, no. 2, pp. 309–317, 2010. 
For the curious the plots were created using this code: # Just for making pretty graphs with the comparison plot compareplot <- function(regr_plot, regr_title, true_plot){ print(regr_plot, position=c(0,0.5,1,1), more=T) trellis.focus("toplevel") panel.text(0.3, .8, regr_title, cex = 1.2, font = 2) trellis.unfocus() print(true_plot, position=c(0,0,1,.5), more=F) trellis.focus("toplevel") panel.text(0.3, .65, "True line", cex = 1.2, font = 2) trellis.unfocus() } Cairo_png("Comp_plot_lm.png", width=10, height=14, pointsize=12) compareplot(lm_plot, "Linear regression", true_line_plot) dev.off() Cairo_png("Comp_plot_rq.png", width=10, height=14, pointsize=12) compareplot(rq_plot, "Quantile regression", true_line_plot) dev.off() Cairo_png("Comp_plot_logit_rq.png", width=10, height=14, pointsize=12) compareplot(logit_rq_plot, "Logit - Quantile regression", true_line_plot) dev.off() Cairo_png("Scat. plot.png") qplot(y=linpred, x=xtest, col=gender, ylab="Outcome") dev.off()
Dealing with regression of unusually bounded response variable
Although I'm not entirely certain of what your problem with linear regression is I'm right now finishing an article about how to analyze bounded outcomes. Since I'm not familiar with Beta regression p
Dealing with regression of unusually bounded response variable Although I'm not entirely certain of what your problem with linear regression is I'm right now finishing an article about how to analyze bounded outcomes. Since I'm not familiar with Beta regression perhaps someone else will answer that option. By your question I understand that you get predictions outside the boundaries. In this case I would go for logistic quantile regression. Quantile regression is a very neat alternative to regular linear regression. You can look at different quantiles and get a much better picture of your data than what's possible with regular linear regression. It is also has no assumptions regarding distribution1. Transformation of a variable can often cause funny effects on linear regression, for instance you have a significance in the logistic transformation but that doesn't translate into the regular value. This is not the case with quantiles, the median is always the median regardless of the transformation function. This allows you to transform back and forth without distorting anything. Prof. Bottai suggested this approach to bounded outcomes2, its an excellent method if you want to do individual predictions but it has some issues when you wan't to look at the beta's and interpret them in a non-logistic way. The formula is simple: $logit(y) = log(\frac{y + \epsilon}{max(y) - y + \epsilon})$ Where $y$ is your score and $\epsilon$ is an arbitrary small number. 
Here's an example that I did a while ago when I wanted to experiment with it in R:

library(rms)
library(lattice)
library(cairoDevice)
library(ggplot2)

# Simulate some data
set.seed(10)
intercept <- 0
beta1 <- 0.5
beta2 <- 1
n <- 1000
xtest <- rnorm(n, 1, 1)
gender <- factor(rbinom(n, 1, .4), labels=c("Male", "Female"))
random_noise <- runif(n, -1, 1)

# Add a ceiling and a floor to simulate a bounded score
fake_ceiling <- 4
fake_floor <- -1

# Simulate the predictor
linpred <- intercept + beta1*xtest^3 + beta2*(gender == "Female") + random_noise

# Remove some extremes
extreme_roof <- fake_ceiling + abs(diff(range(linpred)))/2
extreme_floor <- fake_floor - abs(diff(range(linpred)))/2
linpred[linpred > extreme_roof | linpred < extreme_floor] <- NA

# Limit the interval and give a ceiling and a floor effect similar to scores
linpred[linpred > fake_ceiling] <- fake_ceiling
linpred[linpred < fake_floor] <- fake_floor

# Just to give the graphs the same look
my_ylim <- c(fake_floor - abs(fake_floor)*.25,
             fake_ceiling + abs(fake_ceiling)*.25)
my_xlim <- c(-1.5, 3.5)

# Plot
df <- data.frame(Outcome = linpred, xtest, gender)
ggplot(df, aes(xtest, Outcome, colour = gender)) + geom_point()

This gives the following data scatter; as you can see, it is clearly bounded and inconvenient:

###################################
# Calculate & plot the true lines #
###################################
x <- seq(min(xtest), max(xtest), by=.1)
y <- beta1*x^3 + intercept
y_female <- y + beta2
y[y > fake_ceiling] <- fake_ceiling
y[y < fake_floor] <- fake_floor
y_female[y_female > fake_ceiling] <- fake_ceiling
y_female[y_female < fake_floor] <- fake_floor
tr_df <- data.frame(x=x, y=y, y_female=y_female)
true_line_plot <- xyplot(y + y_female ~ x,
                         data=tr_df, type="l",
                         xlim=my_xlim, ylim=my_ylim,
                         ylab="Outcome",
                         auto.key = list(text = c("Male", "Female"), columns=2))

##########################
# Test regression models #
##########################

# Regular linear regression
fit_lm <- Glm(linpred ~ rcs(xtest, 5) + gender, x=T, y=T)
boot_fit_lm <- bootcov(fit_lm, B=500)
p <- Predict(boot_fit_lm,
             xtest=seq(-2.5, 3.5, by=.001),
             gender=c("Male", "Female"))
lm_plot <- plot(p, se=T,
                col.fill=c("#9999FF", "#BBBBFF"),
                xlim=my_xlim, ylim=my_ylim)

This results in the following picture, where females are clearly above the upper boundary:

# Quantile regression - regular
fit_rq <- Rq(formula(fit_lm), x=T, y=T)
boot_rq <- bootcov(fit_rq, B=500)
# A little disturbing warning:
# In rq.fit.br(x, y, tau = tau, ...) : Solution may be nonunique
p <- Predict(boot_rq,
             xtest=seq(-2.5, 3.5, by=.001),
             gender=c("Male", "Female"))
rq_plot <- plot(p, se=T,
                col.fill=c("#9999FF", "#BBBBFF"),
                xlim=my_xlim, ylim=my_ylim)

This gives the following plot with similar problems:

# The logit transformations
logit_fn <- function(y, y_min, y_max, epsilon)
  log((y - (y_min - epsilon))/(y_max + epsilon - y))

antilogit_fn <- function(antiy, y_min, y_max, epsilon)
  (exp(antiy)*(y_max + epsilon) + y_min - epsilon)/
    (1 + exp(antiy))

epsilon <- .0001
y_min <- min(linpred, na.rm=T)
y_max <- max(linpred, na.rm=T)
logit_linpred <- logit_fn(linpred, y_min=y_min, y_max=y_max, epsilon=epsilon)

fit_rq_logit <- update(fit_rq, logit_linpred ~ .)
boot_rq_logit <- bootcov(fit_rq_logit, B=500)
p <- Predict(boot_rq_logit,
             xtest=seq(-2.5, 3.5, by=.001),
             gender=c("Male", "Female"))

# Change back to org. scale -
# otherwise the plot will be on the logit scale
transformed_p <- p
transformed_p$yhat <- antilogit_fn(p$yhat, y_min=y_min, y_max=y_max, epsilon=epsilon)
transformed_p$lower <- antilogit_fn(p$lower, y_min=y_min, y_max=y_max, epsilon=epsilon)
transformed_p$upper <- antilogit_fn(p$upper, y_min=y_min, y_max=y_max, epsilon=epsilon)
logit_rq_plot <- plot(transformed_p, se=T,
                      col.fill=c("#9999FF", "#BBBBFF"),
                      xlim=my_xlim)

The logistic quantile regression has a very nice bounded prediction. Here you can see the issue with the betas, which in the retransformed fashion differ in different regions (as expected):

# Some issues trying to display the gender factor
contrast(boot_rq_logit,
         list(gender=levels(gender), xtest=c(-1:1)),
         FUN=function(x) antilogit_fn(x, epsilon))

 gender xtest   Contrast       S.E.      Lower       Upper      Z Pr(>|z|)
 Male      -1 -2.5001505 0.33677523 -3.1602179 -1.84008320  -7.42   0.0000
 Female    -1 -1.3020162 0.29623080 -1.8826179 -0.72141450  -4.40   0.0000
 Male       0 -1.3384751 0.09748767 -1.5295474 -1.14740279 -13.73   0.0000 *
 Female     0 -0.1403408 0.09887240 -0.3341271  0.05344555  -1.42   0.1558
 Male       1 -1.3308691 0.10810012 -1.5427414 -1.11899674 -12.31   0.0000 *
 Female     1 -0.1327348 0.07605115 -0.2817923  0.01632277  -1.75   0.0809

Redundant contrasts are denoted by *
Confidence intervals are 0.95 individual intervals

References

1. R. Koenker and G. Bassett Jr, "Regression quantiles," Econometrica: Journal of the Econometric Society, pp. 33-50, 1978.
2. M. Bottai, B. Cai, and R. E. McKeown, "Logistic quantile regression for bounded outcomes," Statistics in Medicine, vol. 29, no. 2, pp. 309-317, 2010.

For the curious, the plots were created using this code:

# Just for making pretty graphs with the comparison plot
compareplot <- function(regr_plot, regr_title, true_plot){
  print(regr_plot, position=c(0, 0.5, 1, 1), more=T)
  trellis.focus("toplevel")
  panel.text(0.3, .8, regr_title, cex = 1.2, font = 2)
  trellis.unfocus()
  print(true_plot, position=c(0, 0, 1, .5), more=F)
  trellis.focus("toplevel")
  panel.text(0.3, .65, "True line", cex = 1.2, font = 2)
  trellis.unfocus()
}

Cairo_png("Comp_plot_lm.png", width=10, height=14, pointsize=12)
compareplot(lm_plot, "Linear regression", true_line_plot)
dev.off()

Cairo_png("Comp_plot_rq.png", width=10, height=14, pointsize=12)
compareplot(rq_plot, "Quantile regression", true_line_plot)
dev.off()

Cairo_png("Comp_plot_logit_rq.png", width=10, height=14, pointsize=12)
compareplot(logit_rq_plot, "Logit - Quantile regression", true_line_plot)
dev.off()

Cairo_png("Scat. plot.png")
qplot(y=linpred, x=xtest, col=gender, ylab="Outcome")
dev.off()
24,118
How do you use the EM algorithm to calculate MLEs for a latent variable formulation of a zero inflated Poisson model?
The root of the difficulty you are having lies in the sentence:

Then using the EM algorithm, we can maximize the second log-likelihood.

As you have observed, you can't. Instead, what you maximize is the expected value of the second log likelihood (known as the "complete data log likelihood"), where the expected value is taken over the $z_i$. This leads to an iterative procedure, where at the $k^{th}$ iteration you calculate the expected values of the $z_i$ given the parameter estimates from the $(k-1)^{th}$ iteration (this is known as the "E-step"), then substitute them into the complete data log likelihood (see EDIT below for why we can do this in this case) and maximize that with respect to the parameters to get the estimates for the current iteration (the "M-step").

The complete-data log likelihood for the zero-inflated Poisson in the simplest case - two parameters, say $\lambda$ and $p$ - allows for substantial simplification when it comes to the M-step, and this carries over to some extent to your form. I'll show you how that works in the simple case via some R code, so you can see the essence of it. I won't simplify as much as possible, since that might cause a loss of clarity when you think of your problem:

# Generate data
# Lambda = 1, p(zero) = 0.1
x <- rpois(10000, 1)
x[1:1000] <- 0

# Sufficient statistic for the ZIP
sum.x <- sum(x)

# (Poor) starting values for parameter estimates
phat <- 0.5
lhat <- 2.0

zhat <- rep(0, length(x))
for (i in 1:100) {
  # zhat[x>0] <- 0 always, so no need to make the assignment at every iteration
  zhat[x==0] <- phat/(phat + (1-phat)*exp(-lhat))  # E-step
  lhat <- sum.x/sum(1-zhat)  # in effect, removing E(# zeroes due to z=1)
  phat <- mean(zhat)
  cat("Iteration: ", i, " lhat: ", lhat, " phat: ", phat, "\n")
}

Iteration: 1 lhat: 1.443948 phat: 0.3792712
Iteration: 2 lhat: 1.300164 phat: 0.3106252
Iteration: 3 lhat: 1.225007 phat: 0.268331
...
Iteration: 99 lhat: 0.9883329 phat: 0.09311933
Iteration: 100 lhat: 0.9883194 phat: 0.09310694

In your case, at each step you'll do a weighted Poisson regression where the weights are 1-zhat to get the estimates of $\beta$ and therefore $\lambda_i$, and then maximize

$\sum_i (\mathbb{E}z_i\log{p_i} + (1-\mathbb{E}z_i)\log{(1-p_i)})$

with respect to the coefficient vector of your matrix $\mathbf{G}$ to get the estimates of $p_i$. The expected values are $\mathbb{E}z_i = p_i/(p_i+(1-p_i)\exp{(-\lambda_i)})$, again calculated at each iteration.

If you want to do this for real data, as opposed to just understanding the algorithm, R packages already exist; here's an example http://www.ats.ucla.edu/stat/r/dae/zipoisson.htm using the pscl library.

EDIT: I should emphasize that what we are doing is maximizing the expected value of the complete-data log likelihood, NOT maximizing the complete-data log likelihood with the expected values of the missing data/latent variables plugged in. As it happens, if the complete-data log likelihood is linear in the missing data, as it is here, the two approaches are the same, but otherwise they aren't.
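To make the covariate version concrete, here is a rough sketch of one way the two M-step fits could look (the simulated data and all variable names are my own, not from the question; for real work use pscl::zeroinfl):

```r
# Sketch of EM for a ZIP model with covariates: a weighted Poisson
# regression for the count part, a fractional-response logistic
# regression for the zero-inflation part.
set.seed(1)
n <- 5000
x <- rnorm(n)                                # covariate for lambda_i
g <- rnorm(n)                                # covariate for p_i
z <- rbinom(n, 1, plogis(-1 + g))            # latent inflation indicator
y <- ifelse(z == 1, 0, rpois(n, exp(0.5 + 0.5 * x)))

lam <- rep(mean(y), n)                       # crude starting values
p   <- rep(mean(y == 0) / 2, n)

for (it in 1:50) {
  # E-step: E[z_i | y_i] is nonzero only where y_i = 0
  Ez <- ifelse(y == 0, p / (p + (1 - p) * exp(-lam)), 0)
  # M-step (Poisson part): weighted regression with weights 1 - E[z_i]
  fit_pois <- glm(y ~ x, family = poisson, weights = 1 - Ez)
  lam <- fit_pois$fitted.values
  # M-step (logistic part): maximizes sum(Ez*log p + (1-Ez)*log(1-p))
  fit_bin <- suppressWarnings(glm(Ez ~ g, family = binomial))
  p <- fit_bin$fitted.values
}
coef(fit_pois)  # should land near the true values c(0.5, 0.5)
```

The suppressWarnings is only there because glm complains about the fractional binomial response; the fit itself is exactly the weighted Bernoulli maximization described above.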
24,119
Why do I get wildly different results for poly(raw=T) vs. poly()?
I think this is a bug in the predict function (and hence my fault), which in fact nlme does not share. (Edit: should be fixed in the most recent R-forge version of lme4.) See below for an example ...

I think your understanding of orthogonal polynomials is probably just fine. The tricky thing you need to know about them if you are trying to write a predict method for a class of models is that the basis for the orthogonal polynomials is defined based on a given set of data, so if you naively (like I did!) use model.matrix to try to generate the design matrix for a new set of data, you get a new basis -- which no longer makes sense with the old parameters. Until I get this fixed, I may need to put a trap in that tells people that predict doesn't work with orthogonal polynomial bases (or spline bases, which have the same property).

library(ggplot2)

d <- expand.grid(x=seq(0, 1, length=50), f=LETTERS[1:10])
set.seed(1001)
u.int <- rnorm(10, sd=0.5)
u.slope <- rnorm(10, sd=0.2)
u.quad <- rnorm(10, sd=0.1)
d <- transform(d,
               ypred=(1 + u.int[f]) +
                     (2 + u.slope[f])*x -
                     (1 + u.quad[f])*x^2)
d$y <- rnorm(nrow(d), mean=d$ypred, sd=0.2)
ggplot(d, aes(x=x, y=y, colour=f)) + geom_line() +
    geom_line(aes(y=ypred), linetype=2)

library(lme4)
fm1 <- lmer(y ~ poly(x, 2, raw=TRUE) + (1|f) + (0+x|f) + (0+I(x^2)|f), data=d)
fm2 <- lmer(y ~ poly(x, 2)           + (1|f) + (0+x|f) + (0+I(x^2)|f), data=d)
newdat <- data.frame(x=unique(d$x))
plot(predict(fm1, newdata=newdat, REform=NA))
lines(predict(fm2, newdata=newdat, REform=NA), col=2)

detach("package:lme4")
library(nlme)
fm3 <- lme(y ~ poly(x, 2, raw=TRUE),
           random=list(~1|f, ~0+x|f, ~0+I(x^2)|f), data=d)
VarCorr(fm3)
fm4 <- lme(y ~ poly(x, 2),
           random=list(~1|f, ~0+x|f, ~0+I(x^2)|f), data=d)
newdat <- data.frame(x=unique(d$x))
lines(predict(fm3, newdata=newdat, level=0), col=4)
lines(predict(fm4, newdata=newdat, level=0), col=5)
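The data-dependence of the orthogonal basis can also be seen with poly() alone, without any mixed models (toy numbers of my own): building a fresh basis on new data gives different columns than evaluating the original basis at the new points via predict().

```r
set.seed(1)
x    <- rnorm(20)
newx <- rnorm(5)

b <- poly(x, 2)              # orthogonal basis defined by x
naive   <- poly(newx, 2)     # a *new* basis defined by newx
correct <- predict(b, newx)  # the original basis evaluated at newx

# Coefficients estimated against b only make sense with 'correct',
# so the naive route silently gives wrong predictions
max(abs(naive - correct))    # clearly nonzero
```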
24,120
Calculating VC-dimension of a neural network
I stumbled across your post while hunting for a general formula for calculating VC dimensions on neural nets, but apparently there isn't one. Apparently we only have a hodgepodge of disparate VC equations that only apply in certain narrow cases. Caution: I'm basing this on old research I barely understand, on the concept of VC Dimensions, which I'm only now learning about. Nevertheless, it may be worthwhile to skim this paper by Peter L. Bartlett and Wolfgang Maass1 on the calculability of VC dimensions. Note how they go to great lengths to derive VC formulas in 13 theorems, but how diverse and numerous the necessary conditions are for each. These prerequisites range from the number of operators in activation functions to the types of jumps allowed, the number of neurons and their positions, the bit depth of the input, etc.; there are so many of these scattered "gotchas" that they render the formulas useful only for certain narrow classes of problems. To make matters worse, they point out in Theorems 5 and 8 that sigmoidal activation functions are particularly difficult to calculate VC figures for. On pp. 6-7 they write: "While the VC-dimension of networks with piecewise polynomial activation functions is well understood, most applications of neural networks use the logistic sigmoid function, or Gaussian radial basis function. Unfortunately, it is not possible to compute such functions using a finite number of the arithmetic operations listed in Theorem 5. However, Karpinski and Macintyre [Karpinski and Macintyre, 1997] extended Theorem 5 to allow the computation of exponentials. The proof uses the same ideas, but the bound on the number of solutions of a system of equations is substantially more difficult." 
I also ran across this paper with the encouraging title of "Bounding VC-Dimension for Neural Networks: Progress and Prospects."2 A lot of the math is over my head and I didn't skim it long enough to overcome my lack of translation skills, but I suspect it doesn't offer any earth-shattering solutions, since it predates the second edition of the book containing the chapter by Bartlett and Maass, who cite a later work by the same authors. Perhaps later research over the past 20 years has improved the calculability of VC dimensions for neural nets, but most of the references I've found seem to date from the mid-'90s; apparently there was a flurry of work on the subject back then that has since died down. If the capabilities haven't been extended by more recent scholarship far beyond what they were in the '90s, then I hope someone comes up with a more widely applicable solution soon so I can start calculating VC dimensions on my neural nets as well. Sorry I couldn't provide a more encouraging answer, for the time being at least.

1Bartlett, Peter L. and Maass, Wolfgang, 2003, "Vapnik-Chervonenkis Dimension of Neural Nets," pp. 1188-1192 in The Handbook of Brain Theory and Neural Networks, Arbib, Michael A. ed. MIT Press: Cambridge, Mass.

2Karpinski, Marek and Macintyre, Angus, 1995, "Bounding VC-Dimension for Neural Networks: Progress and Prospects," pp. 337-341 in Proceedings of the 2nd European Conference on Computational Learning Theory, Barcelona, Spain. Vitanyi, P. ed. Lecture Notes in Artificial Intelligence, No. 904. Springer: Berlin.
24,121
Calculating VC-dimension of a neural network
Here is the latest work: http://jmlr.org/papers/v20/17-612.html. Basically, a network with $W$ weights, $L$ layers, and ReLU activations satisfies: $$ c WL \log(W/L) \leq d \leq CWL \log(WL) $$ for some constants $c$ and $C$, where $d$ is the VC dimension. Given the validity of the work, I think it gives handy bounds. I am not sure, though, about the tightness of the bounds (and especially the constants $c$ and $C$), as I haven't fully read it.
24,122
Collinearity between categorical variables
Collinearity between factors is quite complicated. The classical example is the one you get when you group and dummy-encode the three continuous variables 'age', 'period' and 'year'. It is analysed in:

Kupper, L.L., Janis, J.M., Salama, I.A., Yoshizawa, C.N., Greenberg, B.G., & Winsborough, H.H. (1983). Age-period-cohort analysis: an illustration of the problems in assessing interaction in one observation per cell data, Communications in Statistics - Theory and Methods, 12, 23, pp. 201-217.

The coefficients you get, after removing four (not three) references, are only identified up to an unknown linear trend. This can be analysed because the collinearity arises from a known collinearity in the source variables (age+year=period).

Some work has also been done on spurious collinearity between two factors. It has been analysed in:

Eccleston, J.A. & Hedayat, A. (1974). On the theory of connected designs: Characterization and optimality, The Annals of Statistics, 2, 6, pp. 1238-1255.

The upshot is that collinearity among categorical variables means that the dataset must be split into disconnected parts, with a reference level in each component. Estimated coefficients from different components cannot be compared directly.

For more complicated collinearities between three or more factors, the situation is complicated. There do exist procedures for finding estimable functions, i.e. linear combinations of the coefficients which are interpretable, e.g. in:

"On the connectivity of row-column designs" by Godolphin and Godolphin in Utilitas Mathematica (60), pp. 51-65.

But to my knowledge no general silver bullet for handling such collinearities in an intuitive way exists.
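The age-period-year identity is easy to exhibit directly (my own toy grouping, not from the cited paper): once all three variables are grouped and dummy-encoded, the design matrix is rank-deficient, which is exactly the identification problem described above.

```r
# period = age + year (birth year), so the three dummy-coded factors
# carry an exact linear dependency
d <- expand.grid(age = 0:4, year = 2000:2004)
d$period <- d$age + d$year

X <- model.matrix(~ factor(age) + factor(period) + factor(year), data = d)
qr(X)$rank < ncol(X)   # TRUE: one more reference level must be dropped
```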
24,123
Collinearity between categorical variables
After having a chat with some of the stats people around the place, it seems this kind of question may not be the right one to ask. Using ANOVA (or similar methods) to investigate genetic and diagnostic interactions on neuropsychological measures when they are highly correlated is a difficult problem. I've been pointed instead to examine the structure of the data with structural equation modelling. This answer will get updated as I learn more about SEM.
24,124
Can we compare correlations between groups by comparing regression slopes?
Everything that you have written is correct. You can always test out things like that with a toy example. Here is an example with R:

library(MASS)

rho <- .5   ### the true correlation in both groups

S1 <- matrix(c(  1,   rho,
               rho,     1), nrow=2)
S2 <- matrix(c( 16, 4*rho,
              4*rho,    1), nrow=2)
cov2cor(S1)
cov2cor(S2)

xy1 <- mvrnorm(1000, mu=c(0,0), Sigma=S1)
xy2 <- mvrnorm(1000, mu=c(0,0), Sigma=S2)

x <- c(xy1[,1], xy2[,1])
y <- c(xy1[,2], xy2[,2])
group <- c(rep(0, 1000), rep(1, 1000))

summary(lm(y ~ x + group + x:group))

What you will find is that the interaction is highly significant, even though the true correlation is the same in both groups. Why does that happen? Because the raw regression coefficients in the two groups reflect not only the strength of the correlation, but also the scaling of X (and Y) in the two groups. Since those scalings differ, the interaction is significant. This is an important point, since it is often believed that to test the difference in the correlation, you just need to test the interaction in the model above. Let's continue:

summary(lm(xy2[,2] ~ xy2[,1]))$coef[2] - summary(lm(xy1[,2] ~ xy1[,1]))$coef[2]

This will show you that the difference in the regression coefficients for the model fitted separately in the two groups will give you exactly the same value as the interaction term. What we are really interested in though is the difference in the correlations:

cor(xy1)[1,2]
cor(xy2)[1,2]
cor(xy2)[1,2] - cor(xy1)[1,2]

You will find that this difference is essentially zero. Let's standardize X and Y within the two groups and refit the full model:

x <- c(scale(xy1[,1]), scale(xy2[,1]))
y <- c(scale(xy1[,2]), scale(xy2[,2]))
summary(lm(y ~ x + x:group - 1))

Note that I am not including the intercept or the group main effect here, because they are zero by definition. You will find that the coefficient for x is equal to the correlation for group 1 and the coefficient for the interaction is equal to the difference in the correlations for the two groups.

Now, to your question of whether it would be better to use this approach or the test that makes use of Fisher's r-to-z transformation.

EDIT: The standard errors of the regression coefficients that are calculated when you standardize the X and Y values within the groups do not take this standardization into consideration. Therefore, they are not correct. Accordingly, the t-test for the interaction does not control the Type I error rate adequately. I conducted a simulation study to examine this. When $\rho_1 = \rho_2 = 0$, then the Type I error is controlled. However, when $\rho_1 = \rho_2 \ne 0$, then the Type I error of the t-test tends to be overly conservative (i.e., it does not reject often enough for a given $\alpha$ value). On the other hand, the test that makes use of Fisher's r-to-z transformation does perform adequately, regardless of the size of the true correlations in both groups (except when the group sizes get very small and the true correlations in the two groups get very close to $\pm1$).

Conclusion: If you want to test for a difference in correlations, use Fisher's r-to-z transformation and test the difference between those values.
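The recommended test is simple enough to sketch directly (the helper function below is my own; packaged versions exist, e.g. in the psych and cocor packages):

```r
# Fisher r-to-z test for the difference between two independent correlations
fisher_z_test <- function(r1, n1, r2, n2) {
  z1 <- atanh(r1)                        # Fisher's r-to-z transformation
  z2 <- atanh(r2)
  se <- sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
  z  <- (z1 - z2) / se
  2 * pnorm(-abs(z))                     # two-sided p-value
}

fisher_z_test(r1 = .5, n1 = 1000, r2 = .5, n2 = 1000)  # equal r's: p = 1
fisher_z_test(r1 = .5, n1 = 1000, r2 = .3, n2 = 1000)  # a clear difference
```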
Can we compare correlations between groups by comparing regression slopes?
24,125
What is B. D. Ripley's method of seeding the Mersenne-Twister RNG?
I don't know if it's documented except in the code (src/main/RNG.c)

    static void RNG_Init(RNGtype kind, Int32 seed)
    {
        int j;
        BM_norm_keep = 0.0; /* zap Box-Muller history */
        /* Initial scrambling */
        for (j = 0; j < 50; j++)
            seed = (69069 * seed + 1);
        switch (kind) {
        case WICHMANN_HILL:
        case MARSAGLIA_MULTICARRY:
        case SUPER_DUPER:
        case MERSENNE_TWISTER:
            /* i_seed[0] is mti, *but* this is needed for historical consistency */
            for (j = 0; j < RNG_Table[kind].n_seed; j++) {
                seed = (69069 * seed + 1);
                RNG_Table[kind].i_seed[j] = seed;
            }
            FixupSeeds(kind, 1);
        /* snippety snip */
    }

That is, the initialisation works by running a linear congruential generator $x\mapsto 69069\cdot x+1$ for 50 iterations, then enough iterations to fill up the seed, then imposing any constraints needed by the specific generator. This makes sure any user-supplied seeds (often small integers) end up spread across the seed space of the generator. This congruential generator is due to Marsaglia, and Prof Ripley used and recommended it as a pseudorandom number source back in the days when the demand for random numbers was low enough for its period to be sufficient.
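The scrambling step is easy to reproduce outside of C. Here is a small Python sketch of the same congruential recurrence, with the reduction modulo $2^{32}$ made explicit (it happens implicitly in C's unsigned 32-bit arithmetic); the function names are illustrative:

```python
MASK32 = 0xFFFFFFFF  # emulate unsigned 32-bit (Int32) arithmetic

def lcg_step(seed):
    """One step of Marsaglia's congruential generator x -> 69069*x + 1."""
    return (69069 * seed + 1) & MASK32

def scramble_seed(seed, n_seed):
    """Mimic RNG_Init: 50 warm-up iterations, then fill the seed array
    with one further LCG step per seed word."""
    for _ in range(50):          # initial scrambling
        seed = lcg_step(seed)
    i_seed = []
    for _ in range(n_seed):
        seed = lcg_step(seed)
        i_seed.append(seed)
    return i_seed

# Small user-supplied seeds end up spread across the 32-bit seed space
print(scramble_seed(1, 5))
```

Because each LCG step is a bijection on the 32-bit integers, distinct user seeds always produce distinct seed arrays, which is the point of the scrambling.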
24,126
How did Cross Validation become the "Golden Standard" of Measuring the Performance of Statistical Models?
Consider a population $Y|X$ that follows some distribution according to a true model, and you have a set of trained models $f(X,\theta)$ that make predictions of $Y$ given $X$ and are parameterized by $\theta$. The goal is to find out what the error of the models is, in making predictions about samples from the population, as a function of the parameter $\theta$, and to select the model with the lowest error. To achieve this goal we can sample the population (the validation data set) and observe the performance/error of the models for the sample as a function of $\theta$, and use that to estimate the performance/error for the entire population. Now, our observations based on a sample will not be perfect, but the observed empirical distribution of the performance/error (or derived quantities, e.g. the average performance/error) will be close to the real value (provided a sufficiently large sample). Or at least, according to the Glivenko-Cantelli theorem, the empirical distribution can be made as close to the real distribution as we want by increasing the size of the sample (the validation data set). Since the convergence of the empirical distribution towards the true distribution of the performance/error is uniform, any derived quantity (e.g. the mean performance/error) will also converge towards the true value (in the case of the mean one could also use the law of large numbers). So the 'theoretical guarantee' is the law of large numbers, or more generally the Glivenko-Cantelli theorem. Note 1: On a side note, I have heard that the "attractive theoretical promises" made by the Central Limit Theorem and the Bootstrap Method don't tend to be as "attractive" in reality... That's why statistics is not simply mathematics. Indeed this theoretical guarantee is only a guarantee that the estimates are consistent. It means that the estimates converge to the true value, but the practical use might be low if the rate of convergence is slow, or if the initial variance is large.
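The law-of-large-numbers guarantee can be illustrated with a short simulation. In this Python sketch, the true population error rate of a hypothetical model (0.3) and the sample sizes are made-up values chosen for illustration; the validation estimate tightens around the population value as the validation set grows:

```python
import random

random.seed(42)

TRUE_ERROR = 0.3  # population error rate of a hypothetical model

def validation_estimate(n):
    """Empirical error rate on a validation sample of size n."""
    errors = sum(1 for _ in range(n) if random.random() < TRUE_ERROR)
    return errors / n

# Larger validation sets give estimates closer to the population value
for n in [100, 1000, 10000, 100000]:
    est = validation_estimate(n)
    print(n, round(est, 4), round(abs(est - TRUE_ERROR), 4))
```

The shrinking gap in the last column is exactly the consistency that the note above warns is only a limiting guarantee: for any finite validation set, the estimate still carries sampling noise of order $1/\sqrt{n}$.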
24,127
How does one ensure Machine Learning doesn't come to correct classifications via the wrong ways?
One problem in ML is when it uses predictors we do not want it to use, like gender or ethnicity. And even if these are not fed into the model, we may still have predictors that do correlate with these factors, like ZIP codes correlating with ethnicity, or colleges correlating with gender. Assuming that, say, gender does correlate with the outcome we are modeling, then even if we do not feed in genders, but only the college attended, and some colleges are traditionally gender-imbalanced, then we will overall get different classifications or predictions for men than for women. This particular case can be found out by slicing your dataset by gender and checking the outcome predictions, while ignoring all other pieces of information. Unfortunately, this is not simple to do, because the model is not using gender (which we didn't feed in). It's using the college attended, which in turn is correlated with the outcome. Does it even make sense to only slice the dataset in a single dimension like gender, while ignoring possible mediators or confounders, like the college? Is the problem that students from college A perform worse, and that men predominantly attend college A, or is the problem that men perform worse, and that they predominantly attend college A? And which predictor represents a "wrong" way for the model to come to predictions? And then, of course, all this is mixed up with the question of whether the original problem is that the training data already exhibits the results of bias. Maybe male students from college A historically performed worse because there was always a hiring bias against students from college A. Or, conversely, a bias against men. There is no easy solution to this, because it is rarely possible to tease out the "real" effect of bias in the training sample from any true underlying differences. 
Bottom line: there is no simple way to find out whether your model arrived at the "correct" predictions ("men perform worse") through "correct" ways (men indeed perform worse) or through "wrong" ways (men predominantly attend college A, and students from college A perform worse). In particular, there is no way you could test programmatically. Your best bet is likely to subject your model to various stress tests, and have a plan on how to react if you go into production and someone detects a flaw you didn't think of.
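The "slice the dataset by gender and check the outcome predictions" diagnostic can be sketched in a few lines of Python. The synthetic data below, in which college mediates the gender effect exactly as in the discussion above, is invented purely to illustrate the check:

```python
import random

random.seed(1)

# Synthetic population: men mostly attend college A, and the model's score
# depends only on the college -- so a gender gap appears in the predictions
# even though gender itself was never fed to the model.
rows = []
for _ in range(10000):
    gender = random.choice(["m", "f"])
    college = "A" if random.random() < (0.8 if gender == "m" else 0.2) else "B"
    score = 0.4 if college == "A" else 0.6  # model's predicted outcome
    rows.append((gender, score))

def mean_prediction_by_group(rows):
    """Average model prediction per group -- a simple slicing diagnostic."""
    totals, counts = {}, {}
    for group, score in rows:
        totals[group] = totals.get(group, 0.0) + score
        counts[group] = counts.get(group, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

print(mean_prediction_by_group(rows))  # a visible gap between "m" and "f"
```

The diagnostic reveals the gap, but (as argued above) it cannot tell you whether the gap comes through a "correct" or a "wrong" pathway, because the college variable carries both.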
24,128
How does one ensure Machine Learning doesn't come to correct classifications via the wrong ways?
+1: extremely deep question! I will repeat the advice I got from my advisor, without necessarily understanding it! This was years ago so I am probably misrepresenting it. The problem was classification in the presence of some obvious noise (negative signal power levels). I asked: "How should I filter the noise out?". He answered: "You don't. Firstly, the ML model is going to do that for you. Secondly, the noise profile is valuable training data for the model." I am not happy with that but do believe he's extremely competent at what he does.
24,129
How does one ensure Machine Learning doesn't come to correct classifications via the wrong ways?
This is the purpose of the Validation Set. Split your dataset in 3: Train, Test and Validation. Never touch your Validation set again until the last phase. Create your model using Train and Test: train your encoders, do your preprocessing, create your variables, etc. Then create your model and tune it using Train and Test. When you're satisfied and have the model you want, apply your encoders and preprocessing to your Validation set, which acts as totally new data, unknown to the model, of the same type as you'd have in a real case. Apply your model on those brand new data (which you didn't use as a reference to train your model, as you did with Test) to get an overview of how your model will perform in real time with new data. This can be shown easily to the audience: ask them to give you the latest examples they have, remove the final answer from the data, and run your model on it. You'll have precise results on how your model works on fresh data. If they're still doubting, ask for a test phase in which you run your algorithm day to day and in real time, without replacing their current system for the moment, so you can compare results at the end of the test phase. Another thing to check is being sure you don't use variables that you shouldn't have in a real case, or that you couldn't know without knowing the target. That's a classic case of getting a good classification the wrong way.
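A minimal Python sketch of the three-way split described above; the 60/20/20 proportions are an arbitrary illustrative choice:

```python
import random

def train_test_validation_split(data, test_frac=0.2, val_frac=0.2, seed=0):
    """Shuffle once, then carve the data into three disjoint parts.
    The validation part is set aside and not touched until the last phase."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    n_val = int(len(data) * val_frac)
    n_test = int(len(data) * test_frac)
    val = [data[i] for i in idx[:n_val]]
    test = [data[i] for i in idx[n_val:n_val + n_test]]
    train = [data[i] for i in idx[n_val + n_test:]]
    return train, test, val

train, test, val = train_test_validation_split(list(range(100)))
print(len(train), len(test), len(val))  # 60 20 20
```

The key design point is that all fitting and tuning decisions (including fitting encoders and preprocessing steps) are made using only `train` and `test`; `val` is consulted exactly once, at the end.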
24,130
Why ordinal target in classification problems need special attention?
Dave's comments are on the right track. I'll try to expand on them. Ordinal regression is half-way between classification and real-valued regression. When you perform multiclass classification of your ordinal data, you are assigning the same penalty whenever your classifier predicts a wrong class, no matter which one. For example, assume that in your problem for some input vector $x$ the right prediction is $a$. Assume you are training two classifiers, $C_1$ and $C_2$. The first one predicts $b$, while the other predicts $d$. In the multiclass classifier's sense, $C_1$ and $C_2$ are equally far off; both have missed the correct class. But from the ordinal regression perspective, $C_1$ is obviously better than $C_2$, since it has missed the correct "class" only by one bin, not by three. To drive this point to the extreme, imagine performing a very-many-classes classification instead of regression. I.e., you have predictors $x$ and a real-valued response variable $y$. You can treat values of $y$ as classes: $y = 3.14159$ would be one class, $y = 1.4142$ another, and so on. If you had $N$ observations, you're likely to have $N$ different classes (assuming all $y$'s differ). You could try to train a multiclass classifier, but you'd be likely to fail, as there would be only one observation per class. And even if you succeeded (because you were lucky to have the same $y$'s repeat multiple times), you'd essentially have many independent models, where each would only predict its own class and wouldn't care much about the others. Such an ensemble of models would also be quite complex. If each model has, say, $M$ parameters, and if you had $K$ classes to predict $(K < N)$, your ensemble would have $M \cdot K$ parameters. In contrast, the complexity of the regression model is likely to be independent of the number of distinct $y$ values. 
You'd settle in advance for a linear, quadratic, or whatever function to fit through your data and the form of the function would determine the number of parameters. In ordinal regression, e.g. proportional odds logistic regression, it is common to have one set of parameters (a vector) common to all "classes" (i.e. ordinal values), and a set of scalars to distinguish between the individual ordinal values. The same holds also for support vector ordinal regression (see Wei Chu - Support Vector Ordinal Regression), where you have the same model, consisting of the same $\alpha$'s (Lagrange coefficients) for all "classes", and distinguish between the classes only by the corresponding $b$'s (one per "class").
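The parameter sharing described above (one common coefficient vector plus one scalar threshold per category boundary) can be sketched for a proportional odds model. The coefficients and thresholds below are made-up example values, not a fitted model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def proportional_odds_probs(x, beta, thetas):
    """Class probabilities for K+1 ordered categories under a proportional
    odds model: P(Y <= k) = sigmoid(theta_k - x.beta).
    One shared beta for all categories; only the thresholds theta_k differ."""
    xb = sum(xi * bi for xi, bi in zip(x, beta))
    cum = [sigmoid(t - xb) for t in thetas] + [1.0]  # cumulative P(Y <= k)
    probs = [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]
    return probs

# Example: 2 predictors, 4 ordered categories (3 increasing thresholds)
probs = proportional_odds_probs(x=[1.0, 2.0], beta=[0.5, -0.3],
                                thetas=[-1.0, 0.0, 1.0])
print(probs)
```

Note the parameter count: one `beta` vector of length $M$ plus $K$ scalars, rather than the $M \cdot K$ parameters of $K$ unrelated per-class models.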
24,131
Why ordinal target in classification problems need special attention?
I'd argue that there are two potential complications with discarding the ordering information and just running multiclass regression: Model complexity: the parameters of the predictions for each category won't be tied together in any way. So the model is trying to learn how to predict category a, b, c, and d as four separate problems, without realizing that there is some structure (e.g. examples of class a will look more similar to those of class b than those of class c or d). This could lead to poor performance on relatively small datasets. As mentioned in the comments above, the multiclass loss function doesn't take the ordering into account. Presumably in your application mispredicting class a as class b is less bad than mispredicting it as class c. I can think of a few possible solutions to these issues. One option is to define a similarity matrix across conditions in some way. This could come from some measure of uncertainty between the categories (e.g. a confusion matrix from human labels), or be related to the consequences for misprediction (e.g. if class d is a very elite class that gets a special credit score, that could be designated as less similar to the other classes). This kind of similarity matrix could be used to solve both problems listed above, if it is used for regularization (encouraging similar classes to have similar parameters) and for loss (penalizing mispredictions less harshly for similar classes). Another possible answer is to just abandon ordinal prediction entirely, and try to predict the amount of money as a continuous value (which you could then discretize into bins if you wanted to). You still might need to think carefully about the loss function (e.g. if these values span multiple orders of magnitude you may want to penalize squared loss of the log of the values, rather than the values themselves).
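The second point (an ordering-aware loss) can be sketched with a simple distance-based cost matrix. The absolute-difference cost $C_{ij} = |i - j|$ below is one common illustrative choice, not the only option:

```python
def ordinal_cost(true_class, probs):
    """Expected misprediction cost under a cost matrix C[i][j] = |i - j|,
    so mispredicting an adjacent class is penalized less than a distant one.
    `probs` are the model's predicted class probabilities."""
    return sum(p * abs(true_class - j) for j, p in enumerate(probs))

# True class is 0 ("a"); compare a model that confuses it with class 1 ("b")
# against one that confuses it with class 3 ("d").
near_miss = ordinal_cost(0, [0.1, 0.9, 0.0, 0.0])  # predicts mostly "b"
far_miss = ordinal_cost(0, [0.1, 0.0, 0.0, 0.9])   # predicts mostly "d"
print(near_miss, far_miss)  # 0.9 vs 2.7
```

A plain multiclass log-loss would score both of these models identically; the distance-based cost encodes the intuition that a near miss is less bad.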
Why ordinal target in classification problems need special attention?
I'd argue that there are two potential complications with discarding the ordering information and just running multiclass regression: Model complexity: the parameters of the predictions for each cate
Why ordinal target in classification problems need special attention? I'd argue that there are two potential complications with discarding the ordering information and just running multiclass regression: Model complexity: the parameters of the predictions for each category won't be tied together in any way. So the model is trying to learn how to predict category a, b, c, and d as four separate problems, without realizing that there is some structure (e.g. examples of class a will look more similar to those of class b than those of class c or d). This could lead to poor performance on relatively small datasets. As mentioned in the comments above, the multiclass loss function doesn't take the ordering into account. Presumably in your application mispredicting class a as class b is less bad than mispredicting it as class c. I can think of a few possible solutions to these issues. One option is to define a similarity matrix across conditions in some way. This could come from some measure of uncertainty between the categories (e.g. a confusion matrix from human labels), or be related to the consequences for misprediction (e.g. if class d is a very elite class that gets a special credit score, that could be designated as less similar to the other classes). This kind of similarity matrix could be used to solve both problems listed above, if it is used for regularization (encouraging similar classes to have similar parameters) and for loss (penalizing mispredictions less harshly for similar classes). Another possible answer is to just abandon ordinal prediction entirely, and try to predict the amount of money as a continuous value (which you could then discretize into bins if you wanted to). You still might need to think carefully about the loss function (e.g. if these values span multiple orders of magnitude you may want to penalize squared loss of the log of the values, rather than the values themselves).
24,132
Why is an unbiased random walk non-ergodic?
That Wikipedia article writes, The process $X(t)$ is said to be mean-ergodic or mean-square ergodic in the first moment if the time average estimate $${\hat {\mu }}_{X}={\frac {1}{T}}\int _{0}^{T}X(t)\,\mathrm{d}t$$ converges in squared mean to the ensemble average $\mu _{X}$ as $T\rightarrow \infty.$ The problem is that $\hat\mu$ becomes more and more variable as $T$ increases. This becomes apparent when $X(t)$ is the discrete Binomial random walk described in the question, because the time average is $$\hat\mu(X) = \frac{1}{T} \sum_{i=1}^T X(i) = \frac{1}{T} \sum_{i=1}^T \sum_{j=1}^i Z(j) = Z(1) + \frac{T-1}{T}Z(2) + \cdots + \frac{1}{T}Z(T).$$ Notice how the early terms persist: $Z(1)$ appears with coefficient $1$ and the coefficients of the subsequent $Z(i)$ converge to $1$ as $T$ grows. Their contributions to the time average therefore do not get averaged out and consequently the time average cannot converge to a constant. In the context and notation of the Wikipedia article, let's prove this result by finding the mean and variance of the time average. The expectation of $\hat{\mu}_X$ is $$\mathbb{E}(\hat{\mu}_X) = {\frac {1}{T}}\int _{0}^{T}\mathbb{E}(X(t))\,\mathrm{d}t = \frac{1}{T}\int_0^T 0\, \mathrm{d}t = 0.$$ Therefore its variance is the expectation of its square, $$\eqalign{ \operatorname{Var}(\hat{\mu}_X) &= \mathbb{E}\left(\hat{\mu}_X^2\right)\\ &= \mathbb{E}\left({\frac {1}{T}}\int _{0}^{T}X(t)\,\mathrm{d}t \ {\frac {1}{T}}\int _{0}^{T}X(s)\,\mathrm{d}s \right) \\ &= \left(\frac {1}{T}\right)^2 \int_0^T \int_0^T \mathbb{E}(X(t)X(s))\,\mathrm{d}t \mathrm{d}s \\ &= \left(\frac {1}{T}\right)^2 \int_0^T \int_0^T \min(s,t)\,\mathrm{d}t \mathrm{d}s \\ &= \left(\frac {1}{T}\right)^2 \int_0^T \left(\int_0^s t\,\mathrm{d}t + \int_s^T s\,\mathrm{d}t\right)\mathrm{d}s \\ &= \left(\frac {1}{T}\right)^2 \int_0^T \left(\frac{s^2}{2} + (T-s)s\right)\mathrm{d}s \\ &= \left(\frac {1}{T}\right)^2 \frac{T^3}{3} \\ &= \frac{T}{3}. 
}$$ Because this grows ever larger as $T$ grows, $\hat\mu_X$ cannot possibly converge to a constant as required by the definition of ergodicity--even though it has a constant average of zero. Whence Wikipedia writes (to quote the passage fully), An unbiased random walk is non-ergodic. Its expectation value is zero at all times, whereas its time average is a random variable with divergent variance.
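A quick Monte Carlo check of this result, as a sketch using the discrete $\pm 1$ walk from the question rather than the continuous-time process: the empirical variance of the time average grows roughly like $T/3$ instead of shrinking toward zero.

```python
import random

# Monte Carlo check that the variance of the time average of an unbiased
# random walk grows linearly in T (approximately T/3), so the time average
# cannot converge to the ensemble mean of zero.

def time_average_of_walk(T, rng):
    """Time average (1/T) * sum of X(1..T) for a +/-1 random walk."""
    x, total = 0, 0
    for _ in range(T):
        x += rng.choice((-1, 1))
        total += x
    return total / T

def var_of_time_average(T, trials, rng):
    samples = [time_average_of_walk(T, rng) for _ in range(trials)]
    mean = sum(samples) / trials
    return sum((s - mean) ** 2 for s in samples) / (trials - 1)

rng = random.Random(0)
for T in (100, 400, 900):
    print(T, var_of_time_average(T, trials=1000, rng=rng))
```

The printed variances should increase roughly in proportion to $T$, in line with the $T/3$ derivation above.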
24,133
Does multicollinearity increase the variance of the beta for every covariate or just those that are collinear?
I'd vote for singular values/eigenvalues/eigenvectors over determinants and adjugates for the way to approach this. TLDR: standard errors increase as the eigenvalues of $X^TX$ get increasingly small and this corresponds to the formation of valleys in the loss surface representing our increasing inability to separate out candidate $\hat\beta$ values. We're looking to minimize $\|y - Xb\|^2$ over $b\in\mathbb R^p$. Let $X = UDV^T$ be the SVD of $X$. As $X$ gets increasingly close to reduced rank we'll have $d_p\to 0$ (at least) where $d_p$ is the smallest singular value. This reflects the fact that $X$ is getting closer and closer to having a non-trivial null space, which would include (at least) $\text{span}(v_p)$, with $v_p$ being the right singular vector with the smallest singular value, or equivalently the eigenvector of $X^TX$ with the smallest eigenvalue. This means that once we've got $\hat\beta$ we could get an almost identical loss by replacing $\hat\beta$ with $\hat\beta + \alpha v_p$ for $\alpha \in \mathbb R$. This shows that there is a whole affine subspace of almost equal loss (at least for modest values of $\alpha$), and as $d_p\to 0$ the loss will become increasingly equivalent over that subspace until we are truly unable to pick an element from it since they all have identical loss. This is one way to picture high variance: when there are very different values of $b$ leading to an almost identical loss, slight perturbations in the data can lead to very different $\hat\beta$s, which is basically what high variance means. This analysis also tells us that, while some individual coordinates of $\hat\beta$ may get high variance, it's really about the coordinates of $\hat\beta$ expressed w.r.t. the basis given by $V$. Here's an example. I'll build $X$ by picking $U$, $D$, and $V$. Let $$ V = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1/\sqrt 2 & 1/\sqrt 2 \\ 0 & 1/\sqrt 2 & -1/\sqrt 2 \end{bmatrix} $$ $$ D = \text{diag}(2, 1.7, .01) $$ and let $U$ be any matrix in $\mathbb R^{n\times 3}$ with orthonormal columns. 
This leads to $$ (X^TX)^{-1} = VD^{-2}V^T \approx \begin{bmatrix} 1/4 & 0 & 0 \\ 0 & 5000 & -5000 \\ 0 & -5000 & 5000\end{bmatrix} $$ so $\hat\beta_1$ will have a very modest variance but $\hat\beta_2$ and $\hat\beta_3$ have huge variances, and this is because $Xv_3 \approx \mathbf 0$ so $\hat\beta$ can be perturbed along $(0,1,-1)^T$ with only a small change in loss. So it is true that their individual variances get large but I think this is much more fundamental.
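The $(X^TX)^{-1} = VD^{-2}V^T$ computation above can be verified numerically. This sketch uses plain Python lists so no linear-algebra library is required.

```python
import math

# Reproduce the (X^T X)^{-1} = V D^{-2} V^T computation from the example
# with singular values 2, 1.7, 0.01.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

s = 1 / math.sqrt(2)
V = [[1, 0, 0],
     [0, s, s],
     [0, s, -s]]
D_inv2 = [[1 / 2**2, 0, 0],
          [0, 1 / 1.7**2, 0],
          [0, 0, 1 / 0.01**2]]

XtX_inv = matmul(matmul(V, D_inv2), transpose(V))
for row in XtX_inv:
    print([round(v, 2) for v in row])
```

The printed matrix matches the one in the answer: a modest $1/4$ variance factor for $\hat\beta_1$ and entries of magnitude roughly $5000$ in the block for $\hat\beta_2$ and $\hat\beta_3$.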
24,134
t-SNE with mixed continuous and binary variables
Disclaimer: I only have tangential knowledge on the topic, but since no one else answered, I will give it a try. Distance is important: any dimensionality reduction technique based on distances (tSNE, UMAP, MDS, PCoA and possibly others) is only as good as the distance metric you use. As @amoeba correctly points out, there cannot be a one-size-fits-all solution, you need to have a distance metric that captures what you deem important in the data, i.e. that rows you would consider similar have small distance and rows you would consider different have large distance. How do you choose a good distance metric? First, let me do a little diversion: ordination. Well before the glory days of modern machine learning, community ecologists (and quite likely others) have tried to make nice plots for exploratory analysis of multidimensional data. They call the process ordination and it is a useful keyword to search for in the ecology literature going back at least to the 70s and still going strong today. The important thing is that ecologists have very diverse datasets and deal with mixtures of binary, integer and real-valued features (e.g. presence/absence of species, number of observed specimens, pH, temperature). They've spent a lot of time thinking about distances and transformations to make ordinations work well. I do not understand the field very well, but for example the review by Legendre and De Cáceres, Beta diversity as the variance of community data: dissimilarity coefficients and partitioning, shows an overwhelming number of possible distances you might want to check out. Multidimensional scaling: the go-to tool for ordination is multi-dimensional scaling (MDS), especially the non-metric variant (NMDS) which I encourage you to try in addition to t-SNE. I don't know about the Python world, but the R implementation in the metaMDS function of the vegan package does a lot of tricks for you (e.g. 
This has been disputed, see comments: The nice part about MDS is that it also projects the features (columns), so you can see which features drive the dimensionality reduction. This helps you to interpret your data. Keep in mind that t-SNE has been criticized as a tool to derive understanding see e.g. this exploration of its pitfalls - I've heard UMAP solves some of the issues, but I have no experience with UMAP. I also don't doubt part of the reason ecologists use NMDS is culture and inertia, maybe UMAP or t-SNE are actually better. I honestly don't know. Rolling out your own distance If you understand the structure of your data, the ready-made distances and transformations might not be best for you and you might want to build a custom distance metric. While I don't know what your data represent, it might be sensible to compute distance separately for the real-valued variables (e.g. using Euclidean distance if that makes sense) and for the binary variables and add them. Common distances for binary data are for example Jaccard distance or Cosine distance. You might need to think about some multiplicative coefficient for the distances as Jaccard and Cosine both have values in $[0,1]$ regardless of the number of features while the magnitude of Euclidean distance reflects the number of features. A word of caution All the time you should keep in mind that since you have so many knobs to tune, you can easily fall into the trap of tuning until you see what you wanted to see. This is difficult to avoid completely in exploratory analysis, but you should be cautious.
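As an illustration of such a hand-rolled metric, here is a minimal sketch combining Euclidean distance on the continuous part with a weighted Jaccard distance on the binary part. The weight `w` and this particular additive combination are assumptions to be tuned, not a recommendation.

```python
import math

def jaccard_distance(a, b):
    """Jaccard distance between two 0/1 feature vectors."""
    both = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    either = sum(1 for x, y in zip(a, b) if x == 1 or y == 1)
    return 0.0 if either == 0 else 1 - both / either

def mixed_distance(cont_a, cont_b, bin_a, bin_b, w=1.0):
    """Euclidean distance on the continuous features plus w times the Jaccard
    distance on the binary features. Jaccard lies in [0, 1] regardless of the
    number of features while the Euclidean part scales with dimensionality,
    so w needs tuning for your data."""
    euclid = math.sqrt(sum((x - y) ** 2 for x, y in zip(cont_a, cont_b)))
    return euclid + w * jaccard_distance(bin_a, bin_b)

print(mixed_distance([0.0, 1.0], [3.0, 5.0], [1, 0, 1], [1, 1, 0], w=2.0))
```

A precomputed pairwise matrix of such distances can then be fed to any implementation of t-SNE or (N)MDS that accepts custom distance matrices.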
24,135
What does it mean to freeze or unfreeze a model?
As you guessed, freezing prevents the weights of a neural network layer from being modified during the backward pass of training. You progressively 'lock in' the weights for each layer to reduce the amount of computation in the backward pass and decrease training time. You can unfreeze a model if you decide you want to continue training - an example of this is transfer learning: start with a pre-trained model, unfreeze the weights, then continue training on a different dataset. Choosing when to freeze is a balance: freeze early enough to gain a computational speed-up, but not so early that the weights are locked in to values that give inaccurate predictions. The original paper is available on arXiv, it's a good read. FREEZEOUT: ACCELERATE TRAINING BY PROGRESSIVELY FREEZING LAYERS by Andrew Brock, Theodore Lim, J.M. Ritchie and Nick Weston.
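A framework-agnostic toy sketch of what freezing means mechanically (this is an illustration, not the FreezeOut implementation): frozen layers are simply skipped by the update step, so their weights stay fixed.

```python
# A toy illustration of freezing: during the gradient update, parameters of
# frozen layers are left untouched while unfrozen layers keep learning.

class Layer:
    def __init__(self, weight):
        self.weight = weight
        self.frozen = False

def sgd_step(layers, grads, lr=0.1):
    for layer, g in zip(layers, grads):
        if layer.frozen:
            continue                  # frozen layers skip the update entirely
        layer.weight -= lr * g

layers = [Layer(1.0), Layer(1.0)]
layers[0].frozen = True               # progressively "lock in" early layers
sgd_step(layers, grads=[0.5, 0.5])
print(layers[0].weight, layers[1].weight)  # prints 1.0 0.95
```

In PyTorch the equivalent is setting `p.requires_grad = False` for each parameter `p` of the layers you want to freeze (and setting it back to `True` to unfreeze).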
24,136
How do children manage to pull their parents together in a PCA projection of a GWAS data set?
During the discussion with @ttnphns in the comments above, I realized that the same phenomenon can be observed with many fewer than 10 families. Three families (n=3 in my code snippet) appear roughly in the corners of an equilateral triangle. In fact, it is enough to consider only two families (n=2): they end up separated along PC1, with each family projected roughly onto one point. The case of two families can be visualized directly. The original four points in the 10,000-dimensional space are nearly orthogonal and reside in a 4-dimensional subspace, so they form the vertices of a simplex. After centering, they will form a regular tetrahedron, which is a shape in 3D. Here is how it looks: Before the children are added, PC1 can point anywhere; there is no preferred direction. However, after two children are positioned in the centers of two opposite edges, PC1 will go right through them! This arrangement of six points was described by @ttnphns as a "dumbbell": such a cloud, like a dumbbell, will tend to pull the main PCs so that these pierce the heavy regions Note that the opposite edges of a regular tetrahedron are orthogonal to each other and are also orthogonal to the line connecting their centers. This means that each family will be projected to a single point on PC1. Perhaps even less intuitively, if the two children are scaled by the $\sqrt{2}$ factor to give them the same norm as the parents have, then they will "stick out" of the tetrahedron, resulting in a PC1 projection with both parents collapsed together and the child being further apart. This can be seen in the second figure in my question: each family has its parents really close on the PC1/PC2 plane (EVEN THOUGH THEY ARE UNRELATED!), and their child is a bit further apart.
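The two-family experiment can be reproduced in miniature. This is a sketch in which random $\pm 1$ vectors stand in for genotypes (a simplification of the original setup) and each child is the coordinate-wise average of its parents; PC1 is found by power iteration on the $6\times 6$ centered Gram matrix, and the two unrelated parents of each family land almost on top of each other on it.

```python
import random

# Four unrelated "parents" are near-orthogonal random +/-1 vectors in high
# dimension; each child is the coordinate-wise average of its two parents.
rng = random.Random(42)
D = 10_000
parents = [[rng.choice((-1, 1)) for _ in range(D)] for _ in range(4)]

def child(a, b):
    return [(x + y) / 2 for x, y in zip(a, b)]

# order: family A = parent1, parent2, childA; family B = parent3, parent4, childB
points = [parents[0], parents[1], child(parents[0], parents[1]),
          parents[2], parents[3], child(parents[2], parents[3])]

# Center the configuration and form the 6x6 Gram matrix of the centered points.
mean = [sum(col) / 6 for col in zip(*points)]
Y = [[x - m for x, m in zip(p, mean)] for p in points]
G = [[sum(a * b for a, b in zip(Y[i], Y[j])) for j in range(6)] for i in range(6)]

# Power iteration on the Gram matrix; the top eigenvector gives the PC1
# scores of the six points up to a common scale factor.
v = [rng.random() for _ in range(6)]
for _ in range(200):
    w = [sum(G[i][j] * v[j] for j in range(6)) for i in range(6)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

print([round(x, 3) for x in v])   # family A vs family B along PC1
```

The six PC1 scores split into two tight clusters of three (one per family), matching the geometric argument: PC1 runs through the two children and each family collapses to essentially one point.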
24,137
Handling NAs in a regression ?? Data Flags?
The "flagging method" (often called the "dummy variable method" or "indicator variable method") is used mostly to encode predictors with not-applicable values. It can be used to encode predictors with missing values when you're interested in making predictions for new data-sets rather than inferences about parameters, and when the missingness mechanism is presumed to be the same in the samples for which you're making predictions. The problem is that you're fitting a different model in which the non-missing slopes don't equate to the "true" slopes in a model in which all predictors are non-missing.† See e.g. Jones (1996), "Indicator and Stratification Methods for Missing Explanatory Variables in Multiple Linear Regression", JASA, 91, 433. (An exception is in experimental studies in which predictors are orthogonal by design.) Note that you can set the missing values to an arbitrary number, not just zero, for maximum-likelihood procedures. † Suppose the model of interest is $$\eta=\beta_0 + \beta_1 x_1 + \beta_2 x_2$$ where $\eta$ is the linear predictor. Now you introduce $x_3$ as an indicator for missingness in $x_2$: the model becomes $$\eta=\beta'_0 + \beta'_1 x_1 + \beta'_2 x_2 + \beta'_3 x_3$$ When $x_2$ is not missing you set $x_3$ to $0$: $$\eta=\beta'_0 + \beta'_1 x_1 + \beta'_2 x_2$$ When $x_2$ is missing you set $x_3$ to $1$ & $x_2$ to an arbitrary constant $c$: $$\eta=\beta'_0 + \beta'_1 x_1 + \beta'_2 c + \beta'_3$$ Clearly when $x_2$ is missing, the slope of $x_1$ is no longer conditional on $x_2$; overall $\beta'_1$ is an average of conditional & marginal slopes. In general $\beta'_1 \neq \beta_1$.
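The indicator encoding described in the footnote can be sketched as a small preprocessing step; the helper name and the choice of `c` are illustrative, not from the cited paper.

```python
def indicator_encode(x1, x2, c=0.0):
    """Encode (x1, x2) rows where x2 may be None: replace a missing x2 by the
    arbitrary constant c and add an indicator column x3 for missingness."""
    rows = []
    for a, b in zip(x1, x2):
        missing = b is None
        rows.append((a, c if missing else b, 1.0 if missing else 0.0))
    return rows

rows = indicator_encode([1.0, 2.0, 3.0], [5.0, None, 7.0], c=0.0)
for r in rows:
    print(r)
```

Fitting a regression on these three columns gives exactly the primed-coefficient model $\eta=\beta'_0 + \beta'_1 x_1 + \beta'_2 x_2 + \beta'_3 x_3$ discussed above.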
24,138
Handling NAs in a regression ?? Data Flags?
There is no way to “ignore” missing data in a regression procedure. You can impute missing data and there are many reference articles on the topic on Crossvalidated. The method you describe does not match a procedure I’m aware of.
24,139
Handling NAs in a regression ?? Data Flags?
I would caution you against replacing missing values with arbitrary values like 1, 0, the mean of the feature, etc. The data is missing, and it is not appropriate to fill it in arbitrarily. The approach I take that usually works well is to examine your features. It is likely that a few of your features contain the bulk of the missing data. If this is the case, drop them. Although it's usually nice to have more features, if the data is largely missing from them they are not adding much value anyway. Having dropped the features with the most missing values, you may now drop the rows containing the remaining missing values. Usually this will leave you with a sufficient sample size. If not, consider imputation techniques.
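This drop-columns-then-rows procedure can be sketched as follows; the 50% column threshold used in the demo is an arbitrary choice to be adapted to your data.

```python
def drop_missing(rows, col_threshold=0.5):
    """First drop columns whose fraction of missing (None) values exceeds
    col_threshold, then drop rows that still contain a missing value.
    Returns the cleaned rows and the indices of the kept columns."""
    n = len(rows)
    keep_cols = [j for j in range(len(rows[0]))
                 if sum(r[j] is None for r in rows) / n <= col_threshold]
    slim = [[r[j] for j in keep_cols] for r in rows]
    return [r for r in slim if None not in r], keep_cols

data = [[1, None, 3],
        [4, None, 6],
        [7, 8, None],
        [1, None, 2]]
cleaned, kept = drop_missing(data, col_threshold=0.5)
print(kept, cleaned)
```

Here the middle column (75% missing) is dropped first, after which only one row still contains a missing value and is removed, so most of the sample survives.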
24,140
Understanding the shape of confidence interval for polynomial regression (MLR)
The two principal ways of understanding such regression phenomena are algebraic--by manipulating the Normal equations and formulas for their solution--and geometric. Algebra, as illustrated in the question itself, is good. But there are several useful geometric formulations of regression. In this case, visualizing the $(x,y)$ data in $(x,x^2,y)$ space offers insight that otherwise may be difficult to come by. We pay the price of needing to look at three-dimensional objects, which is difficult to do on a static screen. (I find endlessly rotating images to be annoying and so will not inflict any of those on you, even though they can be helpful.) Thus, this answer might not appeal to everyone. But those willing to add the third dimension with their imagination will be rewarded. I propose to help you out in this endeavor by means of some carefully chosen graphics.

Let's begin by visualizing the independent variables. In the quadratic regression model $$y_i = \beta_0 + \beta_1 (x_i) + \beta_2 (x_i^2) + \text{error},\tag{1}$$ the two terms $(x_i)$ and $(x_i^2)$ can vary among observations: they are the independent variables. We can plot all the ordered pairs $(x_i,x_i^2)$ as points in a plane with axes corresponding to $x$ and $x^2.$ It is also revealing to plot all points on the curve of possible ordered pairs $(t,t^2):$

Visualize the responses (dependent variable) in a third dimension by tilting this figure back and using the vertical direction for that dimension. Each response is plotted as a point symbol. These simulated data consist of a stack of ten responses for each of the three $(x,x^2)$ locations shown in the first figure; the possible elevations of each stack are shown with gray vertical lines:

Quadratic regression fits a plane to these points. (How do we know that? Because for any choice of parameters $(\beta_0,\beta_1,\beta_2),$ the set of points in $(x,x^2,y)$ space that satisfy equation $(1)$ are the zero set of the function $-\beta_1(x)-\beta_2(x^2)+(1)y-\beta_0,$ which defines a plane perpendicular to the vector $(-\beta_1,-\beta_2,1).$ This bit of analytic geometry buys us some quantitative support for the picture, too: because the parameters used in these illustrations are $\beta_1=-55/8$ and $\beta_2=15/2,$ and both are large compared to $1,$ this plane will be nearly vertical and oriented diagonally in the $(x,x^2)$ plane.) Here is the least-squares plane fitted to these points:

On the plane, which we might suppose to have an equation of the form $y=f(x,x^2),$ I have "lifted" the curve $(t,t^2)$ to the curve $$t\to (t, t^2, f(t,t^2))$$ and drawn that in black. Let's tilt everything further back so that only the $x$ and $y$ axes are showing, leaving the $x^2$ axis to drop invisibly down from your screen:

You can see how the lifted curve is precisely the desired quadratic regression: it is the locus of all ordered pairs $(x,\hat y)$ where $\hat y$ is the fitted value when the independent variable is set to $x.$

The confidence band for this fitted curve depicts what can happen to the fit when the data points are randomly varied. Without changing the point of view, I have plotted five fitted planes (and their lifted curves) to five independent new sets of data (of which only one is shown):

To help you see this better, I have also made the planes nearly transparent. Evidently the lifted curves tend to have mutual intersections near $x \approx 1.75$ and $x \approx 3.$

Let's look at the same thing by hovering above the three-dimensional plot and looking slightly down and along the diagonal axis of the plane. To help you see how the planes change, I have also compressed the vertical dimension. The vertical golden fence shows all the points above the $(t,t^2)$ curve so you can see more easily how it lifts up to all five fitted planes.

Conceptually, the confidence band is found by varying the data, which causes the fitted planes to vary, which changes the lifted curves, whence they trace out an envelope of possible fitted values at each value of $(x,x^2).$

Now I believe a clear geometric explanation is possible. Because the points of the form $(x_i,x_i^2)$ nearly line up in their plane, all the fitted planes will rotate (and jiggle a tiny bit) around some common line lying above those points. (Let $\mathcal L$ be the projection of that line down to the $(x,x^2)$ plane: it will closely approximate the curve in the first figure.) When those planes are varied, the amount by which the lifted curve changes (vertically) at any given $(x,x^2)$ location will be directly proportional to the distance $(x,x^2)$ lies from $\mathcal L.$

This figure returns to the original planar perspective to display $\mathcal L$ relative to the curve $t\to(t,t^2)$ in the plane of independent variables. The two points on the curve closest to $\mathcal L$ are marked in red. Here, approximately, is where the fitted planes will tend to be closest as the responses vary randomly. Thus, the lifted curves at the corresponding $x$ values (around $1.7$ and $2.9$) will tend to vary least near these points.

Algebraically, finding those "nodal points" is a matter of solving a quadratic equation: thus, at most two of them will exist. We can therefore expect, as a general proposition, that the confidence bands of a quadratic fit to $(x,y)$ data may have up to two places where they come closest together--but no more than that.

This analysis conceptually applies to higher-degree polynomial regression, as well as to multiple regression generally. Although we cannot truly "see" more than three dimensions, the mathematics of linear regression guarantee that the intuition derived from two- and three-dimensional plots of the type shown here remains accurate in higher dimensions.
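A small numeric sketch of the two-pinch-point claim (the design here is illustrative, echoing the stacks in the figures: ten responses at each of $x = 1, 2, 3$). The width of the confidence band at $x_0$ is proportional to $\sqrt{h(x_0)}$, where $h(x_0) = v(x_0)^\top (X^\top X)^{-1} v(x_0)$ with $v(x_0)=(1,x_0,x_0^2)$; this quartic in $x_0$ has exactly two interior local minima, the two places where the band pinches together.

```python
import numpy as np

# Illustrative design: ten responses at each of x = 1, 2, 3.
x = np.repeat([1.0, 2.0, 3.0], 10)
X = np.column_stack([np.ones_like(x), x, x**2])
XtX_inv = np.linalg.inv(X.T @ X)

grid = np.linspace(0.5, 3.5, 601)
V = np.column_stack([np.ones_like(grid), grid, grid**2])
# Leverage h(x0): Var of the fitted mean at x0 is sigma^2 * h(x0),
# so the band width is proportional to sqrt(h(x0)).
h = np.einsum("ij,jk,ik->i", V, XtX_inv, V)

# Interior local minima of h: points lower than both neighbours.
is_min = (h[1:-1] < h[:-2]) & (h[1:-1] < h[2:])
minima = grid[1:-1][is_min]
print(minima)  # two pinch points, symmetric about the center x = 2
```

For this symmetric design the minima sit at $x = 2 \pm \sqrt{1/2}$, with a local maximum of $h$ at the center, matching the qualitative picture above.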
24,141
Understanding the shape of confidence interval for polynomial regression (MLR)
Intuitive

In a very intuitive and rough sense you might see the polynomial curve as two linear curves stitched together (one rising, one decreasing). For these linear curves you may remember the narrow shape in the center. The points on the left of the peak have relatively little influence on the predictions on the right of the peak, and vice versa. So you might expect two narrow regions on both sides of the peak (where changes in the slopes of both sides have relatively little effect). The region around the peak is relatively more uncertain because a change in the slope of the curve has a larger effect in this region. You can draw many curves with a large shift of the peak which still go reasonably through the measurement points.

Illustration

Below is an illustration with some different data, which shows more easily how this pattern (you could say a double knot) can arise:

set.seed(1)
x <- c(rep(c(-6, -5, 6, 5), 5))
y <- 0.2*x^2 + rnorm(20, 0, 1)
plot(x, y, ylim=c(-10,30), xlim=c(-10,10), pch=21, col=1, bg=1, cex=0.3)

data    = list(y=y, x=x, x2=x^2)
newdata = list(y=rep(0,3001), x=seq(-15,15,0.01), x2=seq(-15,15,0.01)^2)

model <- lm(y~1+x+x2, data=data)
predictions = predict(model, newdata = newdata, interval="predict")

lines(newdata$x, predictions[,1])
lines(newdata$x, predictions[,2], lty=2)
lines(newdata$x, predictions[,3], lty=2)

Formal

To be continued: I will place a section later with a more formal explanation. One should be able to express the influence of a specific measurement point on the confidence interval at different places $x$. In this expression one should see more clearly (explicitly) how a change of a certain (random) measurement point has more influence on the error in the interpolated area further away from the measurement points.

I currently cannot grasp a good image of the wavy pattern of prediction intervals, but I hope that this rough idea sufficiently addresses Whuber's comment about not recognizing this pattern in quadratic fits. It is not so much about quadratic fits as about interpolation in general: predictions are less accurate far away from the measurement points, regardless of interpolation or extrapolation. (Certainly this pattern is reduced when more measurement points, at different $x$, are added.)
24,142
Variational autoencoder with Gaussian mixture model
Yes, it has been done. The following paper implements something of that form: Deep Unsupervised Clustering with Gaussian Mixture Variational Autoencoders. Nat Dilokthanakul, Pedro A.M. Mediano, Marta Garnelo, Matthew C.H. Lee, Hugh Salimbeni, Kai Arulkumaran, Murray Shanahan. They experiment with using this approach for clustering. Each Gaussian in the Gaussian mixture corresponds to a different cluster. Because the Gaussian mixture is in the latent space ($z$), and there is a neural network connecting $z$ to $x$, this allows non-trivial clusters in the input space ($x$). That paper also mentions the following blog post, which experiments with a different variation on that architecture: http://ruishu.io/2016/12/25/gmvae/ Thanks to shimao for pointing this out.
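A minimal numpy sketch of the generative story (all sizes, weights, and the stand-in "decoder" are illustrative, not the paper's architecture): draw a mixture component, draw $z$ from that component's Gaussian, then push $z$ through a nonlinear map to $x$, so the clusters in latent space become non-trivial clusters in input space.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d_z, d_x = 3, 2, 5                    # components, latent dim, data dim
weights = np.array([0.5, 0.3, 0.2])      # mixture weights (illustrative)
means = rng.normal(size=(K, d_z)) * 3.0  # one Gaussian mean per cluster
W = rng.normal(size=(d_z, d_x))          # stand-in for decoder weights

def sample_x(n):
    comp = rng.choice(K, size=n, p=weights)       # pick a mixture component
    z = means[comp] + rng.normal(size=(n, d_z))   # Gaussian within the cluster
    return np.tanh(z @ W), comp                   # nonlinear "decoder" to x-space

x, comp = sample_x(1000)
print(x.shape)  # (1000, 5)
```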
24,143
How does Bayesian Sufficiency relate to Frequentist Sufficiency?
If a statistic $T$ is sufficient in the frequentist way, then $p(\mathbf{x} \mid \theta, t) = p(\mathbf{x} \mid t)$, so \begin{align*} p(\theta \mid \mathbf{x}, t) &= \frac{p(\mathbf{x}\mid t,\theta)p(t \mid \theta) p(\theta)}{p(\mathbf{x}\mid t)p(t)} \\ &= \frac{p(t \mid \theta) p(\theta)}{p(t)} \tag{freq. suff.}\\ &= p(\theta \mid t). \end{align*} On the other hand, if $T$ is sufficient in the Bayesian way, then \begin{align*} p(\mathbf{x} \mid \theta, t) &= \frac{p(\mathbf{x}, \theta,t)}{p(\theta,t)}\\ &= \frac{p(\theta \mid \mathbf{x},t) p(\mathbf{x},t)}{p(\theta\mid t)p(t)}\\ &= \frac{p(\mathbf{x},t)}{p(t)} \tag{Bayesian suff.}\\ &= p(\mathbf{x} \mid t). \end{align*} Regarding "predictive sufficiency," what's that? Edit: If you have Bayesian sufficiency, you have predictive sufficiency: \begin{align*} p(\mathbf{x}' \mid \mathbf{x}) &= \int p(\mathbf{x}' \mid \theta)p(\theta \mid \mathbf{x}) d\theta\\ &= \int p(\mathbf{x}' \mid \theta) p(\theta \mid t)d\theta \tag{Bayesian suff.}\\ &= p(\mathbf{x}' \mid t). \end{align*}
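A quick numeric check of the first implication, using a hypothetical Bernoulli example with a discrete prior and $T = \sum_i X_i$: the binomial coefficient in $p(t \mid \theta)$ cancels in the normalization, so the posterior given the full sample equals the posterior given the sufficient statistic alone.

```python
from math import comb
import numpy as np

thetas = np.array([0.2, 0.5, 0.8])   # discrete prior support (illustrative)
prior = np.array([0.3, 0.4, 0.3])
x = np.array([1, 0, 1, 1, 0])        # a Bernoulli sample
n, t = len(x), x.sum()

# Posterior given the full sample x.
lik_x = thetas**t * (1 - thetas)**(n - t)
post_x = prior * lik_x / (prior * lik_x).sum()

# Posterior given only t, where T ~ Binomial(n, theta).
lik_t = comb(n, t) * thetas**t * (1 - thetas)**(n - t)
post_t = prior * lik_t / (prior * lik_t).sum()

print(np.allclose(post_x, post_t))  # True: p(theta | x) = p(theta | t)
```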
24,144
How does Bayesian Sufficiency relate to Frequentist Sufficiency?
We came across an interesting phenomenon a few years ago, when investigating Bayesian model choice with ABC, which I think is related to this question. There is indeed a notion of sufficiency for Bayesian model choice that does not seem particularly meaningful outside the Bayesian approach. Given two models $$\mathfrak{M}_1=\{f_\theta(\cdot); \theta\in\Theta\}$$ and $$\mathfrak{M}_2=\{g_\xi(\cdot); \xi\in\Xi\}$$ and a sample $\mathbf{x}=(x_1,\ldots,x_n)$ from one of these two models, a statistic $S$ is sufficient for model choice, or across models, iff the distribution of $\mathbf{X}$ conditional on $S(\mathbf{X})$ does not depend on either the model index (1 or 2) or the parameter value within the model. When such sufficient statistics exist, a Bayes factor based on $\mathbf{X}$ is the same as a Bayes factor based on $S(\mathbf{X})$. While this is a definition that is not Bayesian per se, I see no direct application outside Bayesian model choice.
24,145
Understanding how to batch and feed data into a stateful LSTM
You're conflating two different things with regard to LSTM models. The batch size refers to how many input-output pairs are used in a single back-propagation pass. This is not to be confused with the window size used as your time series predictors - these are independent hyper-parameters. The normal way to solve this would be to pick a window size (let's say 25 since that was what you proposed). Now say that we use an LSTM network to predict the 26th point using the previous 25 as predictors. You would then repeat that process for each of the remaining points (27-100) using the preceding 25 points as your inputs in each case. That will yield you exactly 75 training points. Batch size will dictate how many of these points are grouped together for backprop purposes. If you picked 5, for instance, you'd get 15 training batches (75 training points divided into 5 batches). Note that this is a very small amount of data, so unless you use a very small NNet or heavy regularization, you're going to be at great risk of overfitting. You'd normally want to do a train-test split to be able to perform out-of-sample validation on the model, but given how few data points you have to work with that's going to be a bit tough.
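The windowing described above is easy to sketch in numpy (the toy series just mirrors the numbers in the answer: 100 points, a window of 25 predicting the next point, batches of 5):

```python
import numpy as np

series = np.arange(100, dtype=float)   # stand-in for the 100-point series
window = 25

# Each training pair: 25 consecutive points as input, the 26th as target.
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
print(X.shape, y.shape)   # (75, 25) and (75,): exactly 75 training points

# Group the 75 pairs into batches for backprop.
batch_size = 5
batches = [(X[i:i + batch_size], y[i:i + batch_size])
           for i in range(0, len(X), batch_size)]
print(len(batches))       # 15 batches of 5 input-output pairs
```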
24,146
Converting odds ratio to percentage increase / reduction
As other answers have clearly articulated, you can't represent an odds ratio as a simple percent increase or decrease of an event happening, as this value depends on the baserate. However, if you have a meaningful baserate, you can calculate the percent success (or failure) relative to that rate. For example, if we have an odds ratio of 0.75 for the effect of an intervention and we know that the baserate for failure (failure in the control group, for example) is 20%, then the failure rate for the treatment group based on an odds ratio of 0.75 is: $$ p_{treatment} = \frac{OR \times p_{control}}{1 + OR \times p_{control} - p_{control}} = \frac{.75 \times .2}{1 + (.75 \times .2) - .2} = .158 $$ Thus, an odds ratio of .75 translates into a failure rate of 15.8% in the treatment group relative to an assumed failure rate of 20% in the control group. This translation of odds ratios into an easily understood metric is commonly used in meta-analyses of odds ratios. This simplifies if we assume a baserate of .50 to: $$ p_{treatment} = \frac{OR}{1+OR} $$
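The conversion above is a one-liner in code (the function name is illustrative); it reproduces both the worked example and the baserate-of-.50 simplification:

```python
def p_treatment(odds_ratio, p_control):
    """Event rate in the treatment group implied by an odds ratio
    and a baseline event rate in the control group."""
    num = odds_ratio * p_control
    return num / (1 + num - p_control)

print(round(p_treatment(0.75, 0.20), 3))  # 0.158, as in the example above
print(round(p_treatment(0.75, 0.50), 3))  # 0.429, i.e. OR / (1 + OR)
```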
24,147
Converting odds ratio to percentage increase / reduction
Both the probability and the odds measure how likely it is that something happens: the odds by showing the expected number of successes per failure, and the probability by showing the expected number of successes per trial (yes, I am a frequentist). So strictly speaking both interpretations are correct. However, the danger is that a reader might equate likely with probability, and thus misinterpret the results. Your first sentence avoids that danger.
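The two scales are related by odds = p/(1 - p) and p = odds/(1 + odds); a tiny sketch of the round trip:

```python
def odds(p):
    """Expected successes per failure, given success probability p."""
    return p / (1 - p)

def prob(o):
    """Expected successes per trial, given odds o."""
    return o / (1 + o)

print(odds(0.2))    # 0.25 successes per failure
print(prob(0.25))   # 0.2 successes per trial
```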
24,148
Converting odds ratio to percentage increase / reduction
It would be nice if statistical concepts could all be reduced to simple percentages and retain the correct information. However, this is not the case. Framing the odds-ratio effect as a percentage reduction or increase completely dissociates the OR from the interpretation readers expect of such analyses. In the colloquial interpretation, a 20% reduction implies a decrease from 100% to 80%, or a 20% reduction from some baseline. The OR, however, is a measure of relative change in the odds; it is related to, but not intrinsically descriptive of, the absolute reduction. For an interpretation anchored in absolute terms (the interpretation we expect and reach for more readily), one needs to calculate the probability of the event under baseline or otherwise meaningful conditions, and then modify this probability with the risk factor or treatment to get the new, absolute probability. Only then can the importance of the factor be weighed and judged. Thus, your first interpretation is the safest, but neither answer is well grounded in the absolute probabilities that the human mind, albeit tenuously, operates upon.
Converting odds ratio to percentage increase / reduction
You can interpret odds ratios in terms of conditional probabilities. As I see your case, the probability of event $Y$ happening is $P(Y=\text{True}|X=x)= 0.8$. If I understand you correctly, this seems to be the case for all $x \in X$, since you say that after a 1-point increase in $X$ the probability of $Y$ happening is still $0.8$. But that is the notion of statistical independence, $P(Y|X)=P(Y)=0.8$, which means the chance of your event $Y$ happening is $0.8$ in all cases, and so both of your statements are false.
Converting odds ratio to percentage increase / reduction
Yes, you are right in both of your observations.

"with one unit increase in X, the odds ratio of event Y happening is 0.80. Does that mean the same as 'For every 1-point increase in X, the odds of event Y happening are reduced by 20%'?"

Odds ratio (for a one-unit increase in X) = (odds in favor of Y at X = x + 1) / (odds in favor of Y at X = x)

0.8 = (odds in favor of Y at X = x + 1) / (odds in favor of Y at X = x)

0.8 * (odds in favor of Y at X = x) = (odds in favor of Y at X = x + 1)

Now, the change in odds (increase/reduction) = (odds in favor of Y at X = x + 1) - (odds in favor of Y at X = x) = 0.8 * (odds in favor of Y at X = x) - (odds in favor of Y at X = x) = -0.2 * (odds in favor of Y at X = x), i.e. a 20% reduction.

In short, (odds ratio - 1) * 100 gives the percentage change in the odds.

"Whereas 'For every 1-point increase in X, event Y is 20% less likely to happen' is an incorrect interpretation of the odds ratio (that's interpreting it as relative risk), am I correct?"

Yes, that's an incorrect statement, as odds are different from probabilities. In the same fashion, the odds ratio is different from relative risk.
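The bookkeeping above can be verified numerically; a small sketch (the starting odds here are arbitrary):

```python
def pct_change_in_odds(odds_ratio):
    """(odds ratio - 1) * 100 = percentage change in the odds."""
    return (odds_ratio - 1) * 100

# an odds ratio of 0.80 is a 20% reduction in the odds,
# whatever the starting odds were
start_odds = 3.0               # arbitrary odds at X = x
new_odds = 0.80 * start_odds   # odds at X = x + 1

print(round(pct_change_in_odds(0.80), 6))              # -20.0
print(round((new_odds - start_odds) / start_odds, 6))  # -0.2
```

Both lines agree: the relative change in odds is 20% regardless of where the odds started, which is what makes the "(OR - 1) * 100" reading legitimate while the probability reading is not.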
What is the weight decay loss?
Weight decay specifies regularization in the neural network. During training, a regularization term is added to the network's loss to compute the backpropagation gradient. The weight decay value determines how dominant this regularization term will be in the gradient computation. As a rule of thumb, the more training examples you have, the weaker this term should be; the more parameters you have, the stronger this term should be. So weight decay is a regularization term that penalizes big weights. When the weight decay coefficient is big, the penalty for big weights is also big; when it is small, weights can grow freely. If you now go back and read the answer you linked in your question, it should make complete sense.
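A minimal sketch of the idea in plain Python (names are illustrative, not from any framework): the penalty is the sum of squared weights scaled by the decay coefficient, added to the task loss.

```python
def loss_with_weight_decay(data_loss, weights, decay=1e-4):
    """Total loss = task loss + weight-decay (L2) penalty on the weights."""
    l2_penalty = decay * sum(w * w for w in weights)
    return data_loss + l2_penalty

weights = [3.0, -4.0]  # toy weight vector, sum of squares = 25
print(loss_with_weight_decay(1.0, weights, decay=0.1))    # 3.5
print(loss_with_weight_decay(1.0, weights, decay=0.001))  # much smaller penalty
```

With a large `decay`, big weights dominate the total loss and the gradient pushes them toward zero; with a small `decay`, the task loss dominates and weights can grow.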
Shortcut connections in ResNet with different spatial sizes
I happened to read this paper recently. The paper introduces a shortcut projection to match the dimensions; see equation 2 in the paper. Three different projection options, A, B, and C, are compared on page 6. There is a TensorFlow implementation of the residual net, where you can find the shortcut projection implemented in real code. Good luck!
Shortcut connections in ResNet with different spatial sizes
The default way of solving this is to use a 1x1 conv with a stride of 2, followed by a batch norm. Yes, half of the pixels will be ignored. See the implementation in PyTorch (from FAIR, where the authors work): link
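A strided 1x1 convolution is just a per-pixel channel projection applied to every other spatial position. A NumPy sketch of that operation (channel-first layout assumed; this is an illustration, not the FAIR implementation):

```python
import numpy as np

def shortcut_projection(x, w):
    """1x1 convolution with stride 2 on a (C_in, H, W) feature map.
    w has shape (C_out, C_in); every other spatial position is skipped."""
    strided = x[:, ::2, ::2]  # halve H and W, dropping odd-indexed pixels
    return np.einsum('oc,chw->ohw', w, strided)

x = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)  # 2 channels, 4x4
w = np.ones((3, 2))                                     # project 2 -> 3 channels
y = shortcut_projection(x, w)
print(y.shape)  # (3, 2, 2)
```

The spatial size is halved and the channel count changed in one step, so the shortcut output matches the main branch's shape and can be added element-wise.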
Shortcut connections in ResNet with different spatial sizes
I'm running into the same issue. It looks like there are two ways to solve this: "When the input and output dimensions don’t match up, we add a convolutional layer in the shortcut path. The arrangement is called convolutional block" https://engmrk.com/residual-networks-resnets/
Visualizing of $\sigma$-algebras as "information"
$\mathscr{G}$ is our information in the sense that for all $A \in \mathscr{G}$, we know whether $\omega \in A$. Let us use the "tickets in a box" metaphor, extended to handle $\sigma$-algebras so that the ticket mentions, for all $A\in \mathscr{F}$, whether the outcome represented by the ticket belongs to $A$. Now, say that someone else picks the ticket and we don't see it. For any $A \in \mathscr{G}$ we may ask whether the ticket says that the outcome is in $A$, and the person holding the ticket tells us. However, if we ask about some $A \in \mathscr{F} \setminus \mathscr{G}$, we hear "Sorry, you don't know that". Larger $\sigma$-algebra is more information This also explains why moving to $\mathscr{G}' \supset \mathscr{G}$ means gaining new information -- now we still get answers to $[X \in A?]$-questions about any $A \in \mathscr{G}$, and additionally to some new questions -- those where $A \in \mathscr{G'} \setminus \mathscr{G}$. Random variables The tickets also contain the values of random variables. If the random variable $X$ is $\mathscr{G}$-measurable, we get answers to all our questions about its value, such as [is $X$ equal to $3$?], since by $\mathscr{G}$-measurability of $X$, $\{\omega \mid X(\omega)=3\}\in\mathscr{G}$. Or, to handle the delicacies of the uncountable case, we may also ask [is $X$ in the set $B$?] (since for any particular value we think about, the probability of hearing "yes" may be $0$, and that would be boring). So, in this sense we have all information about the realization of the random variable if our information is $\mathscr{G}$ and the RV is $\mathscr{G}$-measurable. Caveat: the definition of measurability of random variables restricts the sets $B$ we may ask about. [Is $X(\omega) \in B$?] is answered if $B$ is a measurable set in the value space of the random variable (usually the Borel $\sigma$-algebra on $\mathbf{R}$ is assumed without mention).
So, in the uncountable (nondiscrete $X$) case, don't ask whether $X$ is in the Vitali set or the oracle holding the ticket shall be mad. Reference I did not cite any reference in the answer but I consulted J. Jacod and P.E. Protter. Probability essentials (2nd edition), Springer, 2004 about the definition of measurability of random variables. (And have learned these things from the same book previously, if I recall correctly).
Do neural networks usually take a while to "kick in" during training?
The fact that the algorithm took a while to "kick in" is not particularly surprising. In general, the target functions optimized when training neural networks are highly multi-modal. As such, unless you have some sort of clever set of initial values for your problem, there is no reason to believe that you will start on a steep descent. Your optimization algorithm will wander almost randomly until it finds a fairly steep valley to begin descending into. Once this has happened, you should expect most gradient-based algorithms to immediately begin narrowing in on the particular mode closest to them.
Do I apply normalization per entire dataset, per input vector or per feature?
Rather than thinking of the problem in abstract terms, imagine a real-life example. You want to predict job satisfaction (1-5) based on features such as age (in years), work experience (in years), monthly salary (thousands of $), and gender (0 or 1). What would be the point of subtracting the global mean (highly influenced by monthly salary) from gender or from the number of years of work experience? Global or row-wise normalization does not make any sense in most cases. In the column-wise case, on the other hand, you end up with each of the columns having a mean of zero and a standard deviation of one -- each of the features is centered at zero and equally scaled.
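Column-wise standardization is a one-liner with NumPy. A sketch using the job-satisfaction example (the numbers are made up for illustration):

```python
import numpy as np

def standardize_columns(X):
    """Column-wise (per-feature) standardization: every feature ends
    up with mean 0 and standard deviation 1."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

# age, experience, salary (k$) for three people -- wildly different scales
X = np.array([[25.0,  2.0, 3.0],
              [40.0, 15.0, 7.0],
              [31.0,  6.0, 4.0]])
Z = standardize_columns(X)

print(np.allclose(Z.mean(axis=0), 0))  # True: each feature centered
print(np.allclose(Z.std(axis=0), 1))   # True: each feature equally scaled
```

The `axis=0` argument is what makes this per-feature; `axis=1` would give the row-wise normalization the answer argues against, and no axis at all would give the global version.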
KL divergence and expectations
Expected value is a quantity that can be computed for any function of the outcomes. Let $\Omega$ be the space of all possible outcomes and let $q:\Omega \rightarrow \mathbb{R}$ be a probability distribution defined on $\Omega$. For any function $f:\Omega \rightarrow S$ where $S$ is an arbitrary set that is closed under addition and scalar multiplication (e.g. $S = \mathbb{R}$) we can compute the expected value of $f$ under distribution $q$ as follows: $$ \mathbb{E}[f] = \mathbb{E}_{x \sim q}[f(x)] = \sum_{x \in \Omega} q(x) f(x) $$ In the KL-divergence, we have that $f(x) = \ln{\frac{q(x)}{p(x)}}$ for some fixed $p(x)$.
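For a discrete outcome space, the expectation above is a plain weighted sum, and plugging in $f(x) = \ln(q(x)/p(x))$ gives the KL divergence directly. A small sketch (function name is my own):

```python
from math import log

def kl_divergence(q, p):
    """KL(q || p) = E_{x~q}[ ln(q(x)/p(x)) ] over a discrete outcome space.
    Terms with q(x) = 0 contribute nothing to the expectation."""
    return sum(qx * log(qx / px) for qx, px in zip(q, p) if qx > 0)

q = [0.5, 0.5]
p = [0.9, 0.1]
print(round(kl_divergence(q, p), 4))  # 0.5108
print(kl_divergence(q, q))            # 0.0 -- a distribution vs. itself
```

Note the asymmetry: `kl_divergence(q, p)` and `kl_divergence(p, q)` generally differ, because the expectation is taken under the first argument's distribution.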
Understanding linear projection in "The Elements of Statistical Learning"
Including the constant 1 in the input vector is a common trick to include a bias (think of the Y-intercept) while keeping all the terms of the expression symmetrical: you can write $\beta X$ instead of $\beta_0 + \beta X$ everywhere. If you do this, it is then correct that the hyperplane $Y = \beta X$ includes the origin, since the origin is a vector of $0$ values and multiplying it by $\beta$ gives the value $0$. However, your input vectors will always have their first element equal to $1$; therefore they will never contain the origin, and will be placed on a smaller hyperplane, which has one less dimension. You can visualize this by thinking of a line $Y=mx+q$ on your sheet of paper (2 dimensions). If you include the bias $q$ this way, your vector becomes $X = [x, x_0=1]$ and your coefficients $\beta = [m, q]$. In 3 dimensions this is a plane passing through the origin that intersects the plane $x_0=1$, producing the line where your inputs can be placed.
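The trick is easy to demonstrate numerically; a sketch with example intercept and slope values:

```python
beta0, beta1 = 5.0, 2.0  # example intercept and slope
x = 3.0

y_explicit = beta0 + beta1 * x  # intercept written out separately

x_aug = [1.0, x]                # first element fixed to 1
beta = [beta0, beta1]
y_compact = sum(b * xi for b, xi in zip(beta, x_aug))  # single inner product

print(y_explicit, y_compact)  # 11.0 11.0
```

The two forms are identical; the augmented version just lets every prediction be written as one inner product, with no special case for the intercept.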
Understanding linear projection in "The Elements of Statistical Learning"
To help you understand this I made a visualisation of a very simple case. Let's say we have a one-dimensional problem (p=1), so a single feature (input variable) $X_1$ to predict a single output variable $Y$. Let's imagine that we already found an intercept $\beta_0 = 5$ and a coefficient $\beta_1 = 2$ for our input variable $X_1$. Our linear model would look like: $\hat{Y} = \beta_0 + \beta_1 \times X_1$. Hence the obvious representation would be a hyperplane (a line) in (p+1)-dimensional space, in this case 2d: Another representation would be to add another variable $X_0$, which leads to the following equation: $\hat{Y} = \beta_0 \times X_0 + \beta_1 \times X_1$. In practice we know that $X_0$ will be a constant equal to 1, but let's assume it is not fixed yet. In that case, we can now plot a 3d graph with a hyperplane as follows: Finally, since we know only $X_0 = 1$ is possible, I highlighted with a red dashed line the only valid projection out of this hyperplane, which corresponds exactly to the plot we had before.
Understanding linear projection in "The Elements of Statistical Learning"
I believe that both of the answers here are incorrect, because the textbook itself is incorrect, so they're trying to justify an incorrect concept. See this answer by the user Jean-Claude Arbaut.
Why is VC dimension important?
What is the VC dimension As mentioned by @CPerkins, the VC dimension is a measure of a model's complexity. It can also be defined via the ability to shatter datapoints, as Wikipedia (which you mentioned) does. The basic problem We want a model (e.g. some classifier) that generalizes well on unseen data, and we are limited to a specific amount of sample data. The following image (taken from here) shows some models ($\mathcal{S_1}$ up to $\mathcal{S_k}$) of differing complexity (VC dimension), here shown on the x-axis and called $h$. The image shows that a higher VC dimension allows for a lower empirical risk (the error a model makes on the sample data), but also introduces a wider confidence interval. This interval can be seen as the confidence in the model's ability to generalize. Low VC dimension (high bias) If we use a model of low complexity, we introduce some kind of assumption (bias) regarding the dataset, e.g. when using a linear classifier we assume the data can be described by a linear model. If this is not the case, the given problem cannot be solved by a linear model, for example because the problem is of a nonlinear nature. We will end up with a badly performing model which will not be able to learn the data's structure. We should therefore try to avoid introducing a strong bias. High VC dimension (greater confidence interval) On the other side of the x-axis we see models of higher complexity which might have such great capacity that they will rather memorize the data instead of learning its general underlying structure, i.e. the model overfits. After realizing this problem it seems that we should avoid complex models. This may seem contradictory: we should not introduce a strong bias, i.e. not have too low a VC dimension, but we should also not have too high a VC dimension. This problem has deep roots in statistical learning theory and is known as the bias-variance tradeoff. 
What we should do in this situation is be as complex as necessary and as simple as possible, so when comparing two models which end up with the same empirical error, we should use the less complex one. I hope I could show you that there is more behind the idea of the VC dimension.
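A small numerical sketch of this tradeoff (not part of the original answer; the toy data below are assumed). A degree-1 polynomial plays the low-capacity, high-bias model, and a degree-9 polynomial the high-capacity one: with 10 training points, the degree-9 fit drives the empirical risk to essentially zero by memorizing the noise.

```python
import numpy as np

# Toy sketch: a low-capacity model (degree-1 polynomial, strong bias) vs a
# high-capacity model (degree-9 polynomial) fit to the same 10 noisy samples
# drawn from an underlying sine curve.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.2, 10)

def train_mse(degree):
    # Least-squares polynomial fit, evaluated on the training points.
    coeffs = np.polyfit(x_train, y_train, degree)
    return float(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))

simple_train = train_mse(1)   # high bias: cannot follow the sine at all
complex_train = train_mse(9)  # 10 points, degree 9: interpolates the noise
```

The near-zero training error of the complex model is exactly the "lower empirical risk" of the answer; its wildly oscillating fit is the overfitting that the wider confidence interval warns about.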
Why is VC dimension important?
What is the VC dimension As mentioned by @CPerkins the VC dimension is a measure of a model's complexity. It also can be defined with regard to the ability to shatter datapoints like, as you mentioned
Why is VC dimension important? What is the VC dimension As mentioned by @CPerkins, the VC dimension is a measure of a model's complexity. It can also be defined via the ability to shatter datapoints, as Wikipedia (which you mentioned) does. The basic problem We want a model (e.g. some classifier) that generalizes well on unseen data, and we are limited to a specific amount of sample data. The following image (taken from here) shows some models ($\mathcal{S_1}$ up to $\mathcal{S_k}$) of differing complexity (VC dimension), here shown on the x-axis and called $h$. The image shows that a higher VC dimension allows for a lower empirical risk (the error a model makes on the sample data), but also introduces a wider confidence interval. This interval can be seen as the confidence in the model's ability to generalize. Low VC dimension (high bias) If we use a model of low complexity, we introduce some kind of assumption (bias) regarding the dataset, e.g. when using a linear classifier we assume the data can be described by a linear model. If this is not the case, the given problem cannot be solved by a linear model, for example because the problem is of a nonlinear nature. We will end up with a badly performing model which will not be able to learn the data's structure. We should therefore try to avoid introducing a strong bias. High VC dimension (greater confidence interval) On the other side of the x-axis we see models of higher complexity which might have such great capacity that they will rather memorize the data instead of learning its general underlying structure, i.e. the model overfits. After realizing this problem it seems that we should avoid complex models. This may seem contradictory: we should not introduce a strong bias, i.e. not have too low a VC dimension, but we should also not have too high a VC dimension. This problem has deep roots in statistical learning theory and is known as the bias-variance tradeoff. 
What we should do in this situation is be as complex as necessary and as simple as possible, so when comparing two models which end up with the same empirical error, we should use the less complex one. I hope I could show you that there is more behind the idea of the VC dimension.
Why is VC dimension important? What is the VC dimension As mentioned by @CPerkins the VC dimension is a measure of a model's complexity. It also can be defined with regard to the ability to shatter datapoints like, as you mentioned
24,163
Why is VC dimension important?
The VC dimension is the number of bits of information (samples) one needs in order to find a specific object (function) among a set of $N$ objects (functions). The $VC$ dimension comes from a similar concept in information theory. Information theory started from Shannon's observation of the following: if you have $N$ objects and among these $N$ objects you're looking for a specific one, how many bits of information do you need to find this object? You can split your set of objects into two halves and ask "In which half is the object I'm looking for located?". You receive "yes" if it is in the first half or "no" if it is in the second half. In other words, you receive 1 bit of information. After that, you ask the same question and split your set again and again, until you finally find your desired object. How many bits of information (yes/no answers) do you need? Clearly $\log_2(N)$ bits of information, as in the binary search problem on a sorted array. Vapnik and Chervonenkis asked a similar question for the pattern recognition problem. Suppose you have a set of $N$ functions such that, given input $x$, each function outputs yes or no (supervised binary classification problem), and among these $N$ functions you are looking for a specific function that gives you the correct yes/no results for a given dataset $D=\{(x_1,y_1), (x_2, y_2), ..., (x_l, y_l)\}$. You can ask the question: "Which functions return no and which functions return yes for a given $x_i$ from your dataset?" Since you know what the real answer is from the training data you have, you can throw away all the functions that give you the wrong answer for some $x_i$. How many bits of information do you need? Or in other words: how many training examples do you need to remove all those wrong functions? Here there is a small difference from Shannon's observation in information theory. 
You aren't splitting your set of functions into exact halves (maybe only one function out of $N$ gives you an incorrect answer for some $x_i$); moreover, your set of functions may be very big, and it's sufficient to find a function that is $\epsilon$-close to your desired function while being sure that it is $\epsilon$-close with probability $1-\delta$ (the $(\epsilon, \delta)$-PAC framework). In that case the number of bits of information (number of samples) you need is $\frac{\log_2(N/\delta)}{\epsilon}$. Suppose now that among the set of $N$ functions there is no error-free function. As before, it is enough for you to find a function that is $\epsilon$-close with probability $1-\delta$. The number of samples you would need is $\frac{\log_2(N/\delta)}{\epsilon^2}$. Note that the results in both cases are proportional to $\log_2 N$, as in the binary search problem. Now suppose that you have an infinite set of functions and among those functions you want to find the one that is $\epsilon$-close to the best function with probability $1-\delta$. Suppose (for simplicity of illustration) that the functions are continuous affine functions (as in SVM) and you have found a function that is $\epsilon$-close to the best function. If you moved your function a little bit such that it doesn't change the classification results, you would have a different function that classifies with the same results as the first one. You can take all such functions that give you the same classification results (classification error) and count them as a single function, because they classify your data with the exact same loss (a line in the picture). 
[Figure: both lines (functions) classify the points with the same success] How many samples do you need to find a specific function from a set of sets of such functions (recall that we divided our functions into sets of functions where each function gives the same classification results for a given set of points)? This is what the $VC$ dimension tells you: $\log_2 N$ is replaced by $VC$ because you have an infinite number of continuous functions that are divided into sets of functions with the same classification error for specific points. The number of samples you would need is $\frac{VC - \log(\delta)}{\epsilon}$ if you have a function that classifies perfectly, and $\frac{VC - \log(\delta)}{\epsilon^2}$ if you don't have a perfect function in your original set of functions. That is, the $VC$ dimension gives you an upper bound (that cannot be improved, by the way) on the number of samples you need in order to achieve $\epsilon$ error with probability $1-\delta$.
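To make the quoted bounds concrete, here is a hypothetical sketch that plugs example numbers into them (the constants are schematic, matching the answer's rough statements rather than the tightest known bounds; N, vc, eps, and delta are assumed values):

```python
import math

# Schematic sample-size estimates from the bounds quoted in the answer
# (realizable case): a finite class of N functions vs an infinite class
# of VC dimension vc.
def samples_finite_class(N, eps, delta):
    # finite class of N functions: ~ log2(N / delta) / eps
    return math.log2(N / delta) / eps

def samples_vc_class(vc, eps, delta):
    # infinite class of VC dimension vc: ~ (vc - log(delta)) / eps
    return (vc - math.log(delta)) / eps

m_finite = samples_finite_class(N=1024, eps=0.1, delta=0.05)  # ~143 samples
m_vc = samples_vc_class(vc=3, eps=0.1, delta=0.05)            # ~60 samples
```

Note how the finite-class bound scales with $\log_2 N$ while the VC bound replaces that term with the VC dimension, exactly as the answer describes.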
Why is VC dimension important?
VC dimension is the number of bits of information (samples) one needs in order to find a specific object (function) among a set of $N$ objects (functions). $VC$ dimension comes from a similar concept
Why is VC dimension important? The VC dimension is the number of bits of information (samples) one needs in order to find a specific object (function) among a set of $N$ objects (functions). The $VC$ dimension comes from a similar concept in information theory. Information theory started from Shannon's observation of the following: if you have $N$ objects and among these $N$ objects you're looking for a specific one, how many bits of information do you need to find this object? You can split your set of objects into two halves and ask "In which half is the object I'm looking for located?". You receive "yes" if it is in the first half or "no" if it is in the second half. In other words, you receive 1 bit of information. After that, you ask the same question and split your set again and again, until you finally find your desired object. How many bits of information (yes/no answers) do you need? Clearly $\log_2(N)$ bits of information, as in the binary search problem on a sorted array. Vapnik and Chervonenkis asked a similar question for the pattern recognition problem. Suppose you have a set of $N$ functions such that, given input $x$, each function outputs yes or no (supervised binary classification problem), and among these $N$ functions you are looking for a specific function that gives you the correct yes/no results for a given dataset $D=\{(x_1,y_1), (x_2, y_2), ..., (x_l, y_l)\}$. You can ask the question: "Which functions return no and which functions return yes for a given $x_i$ from your dataset?" Since you know what the real answer is from the training data you have, you can throw away all the functions that give you the wrong answer for some $x_i$. How many bits of information do you need? Or in other words: how many training examples do you need to remove all those wrong functions? Here there is a small difference from Shannon's observation in information theory. 
You aren't splitting your set of functions into exact halves (maybe only one function out of $N$ gives you an incorrect answer for some $x_i$); moreover, your set of functions may be very big, and it's sufficient to find a function that is $\epsilon$-close to your desired function while being sure that it is $\epsilon$-close with probability $1-\delta$ (the $(\epsilon, \delta)$-PAC framework). In that case the number of bits of information (number of samples) you need is $\frac{\log_2(N/\delta)}{\epsilon}$. Suppose now that among the set of $N$ functions there is no error-free function. As before, it is enough for you to find a function that is $\epsilon$-close with probability $1-\delta$. The number of samples you would need is $\frac{\log_2(N/\delta)}{\epsilon^2}$. Note that the results in both cases are proportional to $\log_2 N$, as in the binary search problem. Now suppose that you have an infinite set of functions and among those functions you want to find the one that is $\epsilon$-close to the best function with probability $1-\delta$. Suppose (for simplicity of illustration) that the functions are continuous affine functions (as in SVM) and you have found a function that is $\epsilon$-close to the best function. If you moved your function a little bit such that it doesn't change the classification results, you would have a different function that classifies with the same results as the first one. You can take all such functions that give you the same classification results (classification error) and count them as a single function, because they classify your data with the exact same loss (a line in the picture). 
[Figure: both lines (functions) classify the points with the same success] How many samples do you need to find a specific function from a set of sets of such functions (recall that we divided our functions into sets of functions where each function gives the same classification results for a given set of points)? This is what the $VC$ dimension tells you: $\log_2 N$ is replaced by $VC$ because you have an infinite number of continuous functions that are divided into sets of functions with the same classification error for specific points. The number of samples you would need is $\frac{VC - \log(\delta)}{\epsilon}$ if you have a function that classifies perfectly, and $\frac{VC - \log(\delta)}{\epsilon^2}$ if you don't have a perfect function in your original set of functions. That is, the $VC$ dimension gives you an upper bound (that cannot be improved, by the way) on the number of samples you need in order to achieve $\epsilon$ error with probability $1-\delta$.
Why is VC dimension important? VC dimension is the number of bits of information (samples) one needs in order to find a specific object (function) among a set of $N$ objects (functions). $VC$ dimension comes from a similar concept
24,164
Why is VC dimension important?
The VC dimension is a measure of the complexity of the model. For example, given the VC dimension $D_{vc}$, a good rule of thumb is that you should have about $n = 10 \times D_{vc}$ data points. You can also use it to derive an upper bound on the test error.
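As a hypothetical illustration of this rule of thumb (the feature count below is an assumed example, not a value from the answer):

```python
# Hypothetical illustration of the n ~ 10 * Dvc rule of thumb. For a linear
# classifier on d features (with a bias term), the VC dimension is d + 1;
# d = 20 below is an assumed example.
def recommended_sample_size(d_vc, factor=10):
    return factor * d_vc

d = 20
d_vc = d + 1                           # VC dimension of a linear classifier
n = recommended_sample_size(d_vc)      # 10 * 21 = 210 data points
```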
Why is VC dimension important?
The VC dimension is a measure of the complexity of the model. For example, given the VC dimension Dvc, a good rule of thumb is that you should have n = 10xDvc data points given the complexity of your
Why is VC dimension important? The VC dimension is a measure of the complexity of the model. For example, given the VC dimension $D_{vc}$, a good rule of thumb is that you should have about $n = 10 \times D_{vc}$ data points. You can also use it to derive an upper bound on the test error.
Why is VC dimension important? The VC dimension is a measure of the complexity of the model. For example, given the VC dimension Dvc, a good rule of thumb is that you should have n = 10xDvc data points given the complexity of your
24,165
Expected value of $R^2$, the coefficient of determination, under the null hypothesis
This is accurate mathematical statistics. See this post for the derivation of the distribution of $R^2$ under the hypothesis that all regressors (bar the constant term) are uncorrelated with the dependent variable ("random predictors"). This distribution is a Beta, with $m$ being the number of predictors without counting the constant term, and $n$ the sample size, $$R^2 \sim Beta\left (\frac {m}{2}, \frac {n-m-1}{2}\right)$$ and so $$E(R^2) = \frac {m/2}{(m/2)+[(n-m-1)/2]} = \frac{m}{n-1}$$ This appears to be a clever way to "justify" the logic behind the adjusted $R^2$: if indeed all regressors are uncorrelated, then the adjusted $R^2$ is "on average" zero.
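A Monte Carlo sketch (not part of the original answer; the sample size, number of predictors, and replication count are assumed) checking that the average $R^2$ under the null is close to $m/(n-1)$:

```python
import numpy as np

# Monte Carlo check of E(R^2) = m/(n-1) under the null: m random predictors
# unrelated to y.
rng = np.random.default_rng(42)
n, m, reps = 50, 5, 2000
r2 = np.empty(reps)
for i in range(reps):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, m))])  # constant + m predictors
    y = rng.normal(size=n)                                      # independent of X
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = np.sum((y - y.mean()) ** 2)
    r2[i] = 1.0 - np.sum(resid ** 2) / tss

mean_r2 = float(r2.mean())   # theory: m/(n-1) = 5/49, about 0.102
```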
Expected value of $R^2$, the coefficient of determination, under the null hypothesis
This is accurate mathematical statistics. See this post for the derivation of the distribution of $R^2$ under the hypothesis that all regressors (bar the constant term) are uncorrelated with the depen
Expected value of $R^2$, the coefficient of determination, under the null hypothesis This is accurate mathematical statistics. See this post for the derivation of the distribution of $R^2$ under the hypothesis that all regressors (bar the constant term) are uncorrelated with the dependent variable ("random predictors"). This distribution is a Beta, with $m$ being the number of predictors without counting the constant term, and $n$ the sample size, $$R^2 \sim Beta\left (\frac {m}{2}, \frac {n-m-1}{2}\right)$$ and so $$E(R^2) = \frac {m/2}{(m/2)+[(n-m-1)/2]} = \frac{m}{n-1}$$ This appears to be a clever way to "justify" the logic behind the adjusted $R^2$: if indeed all regressors are uncorrelated, then the adjusted $R^2$ is "on average" zero.
Expected value of $R^2$, the coefficient of determination, under the null hypothesis This is accurate mathematical statistics. See this post for the derivation of the distribution of $R^2$ under the hypothesis that all regressors (bar the constant term) are uncorrelated with the depen
24,166
What does it mean (intuitively) to hold other variables constant in regression?
Intuition is a tricky subject; it depends on the person's background. For instance, I studied statistics after studying mathematical physics. For me the intuition is in partial derivatives. Consider a regression model $$y_i=a+b_x x_i+b_z z_i+\varepsilon_i$$ It can be restated as $$y_i=f(x_i,z_i)+\varepsilon_i,$$ where $f(x,z)=a+b_x x + b_z z$. Take the total derivative of the function $f()$: $$df=\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial z}dz$$ This is how the partial derivative w.r.t. $x$ is defined: $$\frac{\partial f}{\partial x}=\lim_{\Delta x\to 0} \frac{f(x+\Delta x,z)-f(x,z)}{\Delta x}$$ You hold $z$ constant, and step away from $x$. The partial derivative tells you how sensitive $f$ is to a change in $x$. You can see that the beta (coefficient) is the slope on the variable of interest: $$\frac{\partial f}{\partial x}=b_x$$ In other words, in the simple linear model your coefficients are partial derivatives (slopes) with respect to the variables. That's what "holding constant" means to me intuitively.
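A numerical sketch of this reading (the data and true coefficients below are assumptions for illustration): in the fitted surface $\hat y = a + b_x x + b_z z$, raising $x$ by one unit while $z$ is held fixed changes the prediction by exactly $b_x$.

```python
import numpy as np

# Toy data from y = 1 + 2*x - 3*z + noise (assumed coefficients).
rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 1.0 + 2.0 * x - 3.0 * z + rng.normal(scale=0.1, size=n)

# OLS fit with an intercept column.
X = np.column_stack([np.ones(n), x, z])
a, bx, bz = np.linalg.lstsq(X, y, rcond=None)[0]

def f(x_val, z_val):
    # the fitted regression surface
    return a + bx * x_val + bz * z_val

# Step x from 0 to 1 while holding z constant at 0.5: the change equals bx.
delta = f(1.0, 0.5) - f(0.0, 0.5)
```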
What does it mean (intuitively) to hold other variables constant in regression?
Intuition is a tricky subject, it depends on the person's background. For instance, I studied statistics after studying mathematical physics. For me the intuition is in partial derivatives. Consider a
What does it mean (intuitively) to hold other variables constant in regression? Intuition is a tricky subject; it depends on the person's background. For instance, I studied statistics after studying mathematical physics. For me the intuition is in partial derivatives. Consider a regression model $$y_i=a+b_x x_i+b_z z_i+\varepsilon_i$$ It can be restated as $$y_i=f(x_i,z_i)+\varepsilon_i,$$ where $f(x,z)=a+b_x x + b_z z$. Take the total derivative of the function $f()$: $$df=\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial z}dz$$ This is how the partial derivative w.r.t. $x$ is defined: $$\frac{\partial f}{\partial x}=\lim_{\Delta x\to 0} \frac{f(x+\Delta x,z)-f(x,z)}{\Delta x}$$ You hold $z$ constant, and step away from $x$. The partial derivative tells you how sensitive $f$ is to a change in $x$. You can see that the beta (coefficient) is the slope on the variable of interest: $$\frac{\partial f}{\partial x}=b_x$$ In other words, in the simple linear model your coefficients are partial derivatives (slopes) with respect to the variables. That's what "holding constant" means to me intuitively.
What does it mean (intuitively) to hold other variables constant in regression? Intuition is a tricky subject, it depends on the person's background. For instance, I studied statistics after studying mathematical physics. For me the intuition is in partial derivatives. Consider a
24,167
What does it mean (intuitively) to hold other variables constant in regression?
As user122677 answered, the intuition is right: in linear regression every coefficient is the amount of change in the outcome when one variable is increased by a unit while all other variables remain constant. In other words, coefficients are partial derivatives of the model prediction with respect to each variable. Anyway, beware that if our model includes interaction terms, a variable can't be changed without changing the interaction, and therefore this interpretation of a single coefficient doesn't make sense as a real change. The same happens with polynomial regression, where no term can change without changing other terms. About the existence of those subpopulations: they don't need to exist. In some experimental designs they can exist, but in observational studies with continuous variables they are very unlikely to exist. For example: in complete designs of experiments with binary (or discrete finite) variables, every combination of variable values is in the sample. In observational studies with continuous variables, each observation is very likely to have unique values for all variables, and therefore two elements with all variables equal except one are unlikely to exist.
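A toy sketch of the interaction caveat (all coefficients below are assumed for illustration): with $y = a + b_x x + b_z z + b_{xz}\,xz$, the change from a unit step in $x$ depends on $z$, so no single coefficient is "the" effect of $x$.

```python
# Assumed coefficients for a model with an interaction term.
a, bx, bz, bxz = 1.0, 2.0, -3.0, 0.5

def f(x, z):
    # prediction surface with an x*z interaction
    return a + bx * x + bz * z + bxz * x * z

# A unit step in x has a different effect at different values of z:
effect_at_z0 = f(1, 0) - f(0, 0)   # = bx           = 2.0
effect_at_z2 = f(1, 2) - f(0, 2)   # = bx + 2 * bxz = 3.0
```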
What does it mean (intuitively) to hold other variables constant in regression?
As user122677 answered, the intuition is right: In linear regression every coefficient is the amount of change in the outcome when one variable value is increased by a unit while all other variables r
What does it mean (intuitively) to hold other variables constant in regression? As user122677 answered, the intuition is right: in linear regression every coefficient is the amount of change in the outcome when one variable is increased by a unit while all other variables remain constant. In other words, coefficients are partial derivatives of the model prediction with respect to each variable. Anyway, beware that if our model includes interaction terms, a variable can't be changed without changing the interaction, and therefore this interpretation of a single coefficient doesn't make sense as a real change. The same happens with polynomial regression, where no term can change without changing other terms. About the existence of those subpopulations: they don't need to exist. In some experimental designs they can exist, but in observational studies with continuous variables they are very unlikely to exist. For example: in complete designs of experiments with binary (or discrete finite) variables, every combination of variable values is in the sample. In observational studies with continuous variables, each observation is very likely to have unique values for all variables, and therefore two elements with all variables equal except one are unlikely to exist.
What does it mean (intuitively) to hold other variables constant in regression? As user122677 answered, the intuition is right: In linear regression every coefficient is the amount of change in the outcome when one variable value is increased by a unit while all other variables r
24,168
What does it mean (intuitively) to hold other variables constant in regression?
The intuition is correct at its basis. I'll try to answer in a brief and intuitive way as well. Those subpopulations necessarily exist because you hold them constant by: (a) sampling your subjects with regard to your speculated covariates, OR (b) putting a constraint on their variability (i.e. variance = 0). This is done by taking one group (e.g. men only, blonds only, etc.) if it's a categorical variable, or by taking an average of a given covariate (age, education, income and so on).
What does it mean (intuitively) to hold other variables constant in regression?
The intuition is correct at its basis. I'll try to answer in a brief and intuitive way as well. Those subpopulations necessarily exist because you hold them constant by: (a) sampling your subjects wit
What does it mean (intuitively) to hold other variables constant in regression? The intuition is correct at its basis. I'll try to answer in a brief and intuitive way as well. Those subpopulations necessarily exist because you hold them constant by: (a) sampling your subjects with regard to your speculated covariates, OR (b) putting a constraint on their variability (i.e. variance = 0). This is done by taking one group (e.g. men only, blonds only, etc.) if it's a categorical variable, or by taking an average of a given covariate (age, education, income and so on).
What does it mean (intuitively) to hold other variables constant in regression? The intuition is correct at its basis. I'll try to answer in a brief and intuitive way as well. Those subpopulations necessarily exist because you hold them constant by: (a) sampling your subjects wit
24,169
What is meant by PCA preserving only large pairwise distances?
Consider the following dataset: PC1 axis is maximizing the variance of the projection. So in this case it will obviously go diagonally from lower-left to upper-right corner: The largest pairwise distance in the original dataset is between these two outlying points; notice that it is almost exactly preserved in PC1. Smaller but still substantial pairwise distances are between each of the outlying points and all other points; those are preserved reasonably well too. But if you look at the even smaller pairwise distances between the points in the central cluster, then you will see that some of them are strongly distorted. I think this gives the right intuition: PCA finds a low-dimensional subspace with maximal variance. Maximal variance means that the subspace will tend to be aligned so as to pass close to the points lying far away from the center; therefore the largest pairwise distances will tend to be preserved well and the smaller ones less so. However, note that this cannot be turned into a formal argument because in fact it is not necessarily true. Take a look at my answer in What's the difference between principal component analysis and multidimensional scaling? If you take the $10$ points from the figures above, construct a $10\times 10$ matrix of pairwise distances and ask what is the 1D projection that preserves the distances as closely as possible, then the answer is given by the MDS solution and is not given by PC1. However, if you consider a $10\times 10$ matrix of pairwise centered scalar products, then it is in fact best preserved precisely by PC1 (see my answer there for the proof). And one can argue that large pairwise distances usually mean large scalar products too; in fact, one of the MDS algorithms (classical/Torgerson MDS) is willing to explicitly make this assumption. 
So to summarize: PCA aims at preserving the matrix of pairwise scalar products, in the sense that the sum of squared differences between the original and reconstructed scalar products should be minimal. This means that it will preferentially preserve the scalar products with the largest absolute values and will care less about those with small absolute values, as they add less to the sum of squared errors. Hence, PCA preserves larger scalar products better than the smaller ones. Pairwise distances will be preserved only insofar as they are similar to the scalar products, which is often but not always the case. If it is the case, then larger pairwise distances will also be preserved better than the smaller ones.
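A minimal numerical sketch of the summary above (the point configuration is an assumed toy version of the figures: a central cluster plus two outliers on the diagonal):

```python
import numpy as np

# Central cluster plus two outlying points on the diagonal; PC1 will be the
# diagonal. Projecting on it preserves the largest pairwise distance almost
# exactly while collapsing some small within-cluster distances.
X = np.array([[-5.0, -5.0], [5.0, 5.0],          # outliers on the diagonal
              [1.0, -1.0], [-1.0, 1.0],          # central cluster
              [2.0, 0.0], [-2.0, 0.0],
              [0.0, 2.0], [0.0, -2.0]])
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA via SVD
proj = Xc @ Vt[0]                                  # 1D coordinates on PC1

d_orig = np.linalg.norm(X[0] - X[1])               # largest pairwise distance
d_proj = abs(proj[0] - proj[1])                    # almost exactly preserved

d_small_orig = np.linalg.norm(X[2] - X[3])         # a within-cluster distance
d_small_proj = abs(proj[2] - proj[3])              # collapsed to ~0 on PC1
```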
What is meant by PCA preserving only large pairwise distances?
Consider the following dataset: PC1 axis is maximizing the variance of the projection. So in this case it will obviously go diagonally from lower-left to upper-right corner: The largest pairwise dis
What is meant by PCA preserving only large pairwise distances? Consider the following dataset: PC1 axis is maximizing the variance of the projection. So in this case it will obviously go diagonally from lower-left to upper-right corner: The largest pairwise distance in the original dataset is between these two outlying points; notice that it is almost exactly preserved in PC1. Smaller but still substantial pairwise distances are between each of the outlying points and all other points; those are preserved reasonably well too. But if you look at the even smaller pairwise distances between the points in the central cluster, then you will see that some of them are strongly distorted. I think this gives the right intuition: PCA finds a low-dimensional subspace with maximal variance. Maximal variance means that the subspace will tend to be aligned so as to pass close to the points lying far away from the center; therefore the largest pairwise distances will tend to be preserved well and the smaller ones less so. However, note that this cannot be turned into a formal argument because in fact it is not necessarily true. Take a look at my answer in What's the difference between principal component analysis and multidimensional scaling? If you take the $10$ points from the figures above, construct a $10\times 10$ matrix of pairwise distances and ask what is the 1D projection that preserves the distances as closely as possible, then the answer is given by the MDS solution and is not given by PC1. However, if you consider a $10\times 10$ matrix of pairwise centered scalar products, then it is in fact best preserved precisely by PC1 (see my answer there for the proof). And one can argue that large pairwise distances usually mean large scalar products too; in fact, one of the MDS algorithms (classical/Torgerson MDS) is willing to explicitly make this assumption. 
So to summarize: PCA aims at preserving the matrix of pairwise scalar products, in the sense that the sum of squared differences between the original and reconstructed scalar products should be minimal. This means that it will preferentially preserve the scalar products with the largest absolute values and will care less about those with small absolute values, as they add less to the sum of squared errors. Hence, PCA preserves larger scalar products better than the smaller ones. Pairwise distances will be preserved only insofar as they are similar to the scalar products, which is often but not always the case. If it is the case, then larger pairwise distances will also be preserved better than the smaller ones.
What is meant by PCA preserving only large pairwise distances? Consider the following dataset: PC1 axis is maximizing the variance of the projection. So in this case it will obviously go diagonally from lower-left to upper-right corner: The largest pairwise dis
24,170
Is it correct to refer to a negative correlation as an 'inverse correlation'?
In physics, more so than other things, one has occasion to say directly related and inversely related when speaking of proportional relationships. That is inexact language use, of the type often called hand waving, with the advantage of helping students uncomfortable with the concept of proportionality to grasp the essentials of proportionality without using the word. More exact phraseology would be directly proportional and inversely proportional. Similarly, it would be rare to use the phrase directly proportional to the negative of something, as it is easier to grasp a negative slope, and rework a phrase to accommodate that. The concept of inverse proportionality is often approached at the beginner level by hand-waving in this fashion: in this equation, as $x$ increases, $y$ decreases. Although true for definite positive inverse proportionality, this has the disadvantage of not being unique, as a negative slope direct proportionality has that same property. In general, to keep the word inverse from causing confusion, all that need be done is to say inverse ______ <-- what and fill in the blank. What operation is an inverse of what other operation depends on which algebraic procedure is being used, for example, subtraction is inverse addition, division is inverse multiplication, deconvolution is inverse convolution, a matrix inverse is the inverse of an invertible matrix, an inverse Laplace transform is the inverse of a Laplace transform, and so on. The OP's question is: Is it correct to refer to a negative correlation as an 'inverse correlation'? The answer is no, correlation is intransitive, that is, given a correlation one cannot invert the procedure. For it to be an inverse, it would have to be an inverse of some algebraic operation, and the reason for using it in this context is otherwise, namely that it is the inverse of hand-waving of the "as $x$ increases, $y$ increases" type, and hand-waving is not a defined algebraic procedure. 
The unambiguous terminology for "as $x$ increases, $y$ increases" is monotonically increasing, and, to affirm that this is not local, the term strictly monotonically increasing is used, the inverse of which is then monotonically non-increasing and not strictly monotonically decreasing. Now note, the inverse of monotonically increasing is not monotonically decreasing, which demonstrates what the semantic problem is. In that light, then, we can state that the phrase "A negative correlation means that there is an inverse relationship between two variables - when one variable decreases, the other increases." is 1) gibberish of the hand-waving type, that 2) uses the word "inverse" improperly, and which, when cleaned up by replacing all of the improper language, could read "A negative correlation means that the normalized covariance is negative," furthermore 3) a negative correlation does not imply monotonicity between discrete random variables, the admixture of continuous and discrete parameter types notwithstanding, and if not totally incorrect it is a stretch to define a correlation via an ordinary least squares in $y$ linear regression model that would have monotonicity. Finally, the OP's questions, "So, do I have to eat my words? Or can I continue to tell people not to call negative relationships inverse?", have the following answers: 1) Yes, you are correct, and should not (never mind cannot) eat your words. Negative relationships imply additive inverses, not proportional (i.e., multiplicative) inverses, and use of the word inverse in two separate contexts simultaneously is ill-advised.
Is it correct to refer to a negative correlation as an 'inverse correlation'?
Short answer: You don't have to eat your words. In the context of ratio-scale measurement, negative correlation does not properly measure an inverse functional relationship.

Correlation only measures the linear component of a statistical relationship, so if there is a nonlinear statistical relationship, it may still show up as correlation, but the correlation will not fully describe the relationship. In the case of an inverse relationship $y=1/x$ (which is nonlinear), random sampling of these values will certainly be negatively correlated (assuming they are both positive$^\dagger$), so it is correct to say that an inverse relationship does lead to negative correlation. However, there are many other relationships that lead to negative correlation, so it is not correct to say that negative correlation always indicates an inverse relationship.

As some commentators have pointed out, the word "inverse" can be used in its strict mathematical sense, or in a broader sense that refers generally to relationships that are "opposing" in some broad way. Notwithstanding this diversity of usage, in a mathematical context the term "inverse" does have a strict meaning, and so use of this term invites consideration of that kind of function. Whilst it is true that an inverse relationship will lead to negative correlation (again, assuming positive values), describing negative correlation as "inverse correlation" is not good practice, and you are right to be a bit uncomfortable with it. The reason this is not good practice is that correlation measures the linear part of the relationship, not functions of inverse form; calling it "inverse correlation" suggests that it is measuring the inverse component of the relationship, which is not accurate.

$^\dagger$ In fact, it is possible to obtain any correlation value (including positive values or zero) with data from the inverse function $(x_i, y_i)$ with $y_i = 1/x_i$. To obtain zero correlation or positive correlation you can take some data from the negative-negative quadrant and some data from the positive-positive quadrant. In the body of the answer we rule this out by assuming both values are positive.
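The quadrant point is easy to check numerically. A small sketch (in Python with numpy, purely for illustration; no code accompanied the original answer): sampling $y = 1/x$ on positive $x$ alone gives a negative sample correlation, while mixing the two branches of the hyperbola flips the sign.

```python
import numpy as np

rng = np.random.default_rng(0)

# Positive x only: y = 1/x is decreasing there, so the sample
# Pearson correlation comes out negative.
x_pos = rng.uniform(0.5, 3.0, size=1000)
r_pos = np.corrcoef(x_pos, 1.0 / x_pos)[0, 1]

# Mix the negative-negative and positive-positive branches of y = 1/x:
# within each branch the relationship is decreasing, but the two
# clusters line up so that the overall correlation turns positive.
x_mix = np.concatenate([rng.uniform(-3.0, -0.5, size=500),
                        rng.uniform(0.5, 3.0, size=500)])
r_mix = np.corrcoef(x_mix, 1.0 / x_mix)[0, 1]

print(r_pos, r_mix)  # r_pos is negative, r_mix is positive
```

So the same functional form $y = 1/x$ can yield correlations of either sign, depending only on which quadrants the data occupy.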
Is it correct to refer to a negative correlation as an 'inverse correlation'?
I guess the problem comes from a misunderstanding of the term correlation. It is widely accepted that the term correlation refers to linear correlation. Then, in mathematical terminology, $y=-x$ best describes negative correlation. Your example, $y=1/x$, holds only if we define correlation in a non-linear space.
Is it correct to refer to a negative correlation as an 'inverse correlation'?
To me, inverse relationship (correlation) and negative relationship mean the same thing: when one thing goes up, the other goes down. The two ends of a teeter-totter have an inverse relationship (negative relationship).
Is it correct to refer to a negative correlation as an 'inverse correlation'?
Yes, it is correct, and yes, you would have "to eat your words". Arguably, this question is off-topic, as it concerns semantics and consensus, which is not a technical statistical issue but a cultural one. The top-voted answers are biased because they appeal mostly to the same specific sub-culture as yours and that of people who visit Cross Validated. They define a narrow context in which words are defined and then make ethnocentric statements involving truth or correctness (whatever that is in a socially mediated process). The truth is that people other than statisticians and mathematically oriented people use these concepts, and in many cases the concepts are used to communicate between those same people. Correlation is one of those concepts that is extensively used by non-mathematically-trained professionals. Given that the word 'inverse' predates any specifically mathematical definition given to it, it is no surprise that it is frequently (and validly) understood as: turned in the opposite direction; having an opposite course or tendency. A different question is whether it is an ambiguous concept or not. Or, the question others have replied to here: whether the mathematical definition of it applies.
How to interpret and do forecasting using tsoutliers package and auto.arima
These comments are too long ... thus an "ANSWER". You are wrong: it does not adjust and then identify the ARIMA model (as AUTOBOX does). It presumptively assumes no intervention adjustment and then rushes to identify an ARIMA model potentially impacted by the non-treatment of anomalies. Often one needs to adjust for both user-specified causal series and/or unspecified deterministic structure (outliers/level shifts, seasonal pulses, local time trends) before identifying the ARIMA structure. See this example of a poor diagnosis, which makes the mistake of unnecessarily differencing the original series while the true/correct state of nature doesn't need any differencing. Non-stationarity does not necessarily imply the need for differencing but can often suggest de-meaning, i.e. adjustment for a change in level/mean. Correct forecasting is always done from the original series, thus the forecast should be believable given the history. I have no idea, as I do not actively use this procedure. I recommended it to you because you asked for free R-based solutions, NOT because I thought it was good or sufficient, as ARIMA modelling is an iterative (multi-stage) self-checking process. The model suggests that the data has an MA(12) seasonal component, BUT this could simply reflect the need for a seasonal pulse. The concept of a seasonal trend is at best vague. My answer would be too obvious and self-effacing.
How to interpret and do forecasting using tsoutliers package and auto.arima
The package 'tsoutliers' implements the procedure described by Chen and Liu (1993) [1]. A description of the package and the procedure is also given in this document. Briefly, the procedure consists of two main stages: 1) detection of outliers upon a chosen ARIMA model; 2) choosing and/or refitting the ARIMA model including the outliers detected in the previous step, and removing those outliers that are not significant in the new fit. The series is then adjusted for the detected outliers, and stages (1) and (2) are repeated until no more outliers are detected or until a maximum number of iterations is reached. The first stage (detection of outliers) is also an iterative process. At the end of each iteration the residuals from the ARIMA model are adjusted for the outliers detected within this stage. The process is repeated until no more outliers are found or until a maximum number of iterations is reached (by default 4 iterations). The first three warnings that you get are related to this inner loop, i.e., the stage is exited after four iterations. You can increase this maximum number of iterations through the argument maxit.iloop in function tso. It is advisable not to set a high number of iterations in the first stage and to let the process move on to the second stage, where the ARIMA model is refitted or chosen again. Warnings 4 and 5 are related to the process of fitting the ARIMA model and choosing the model, respectively, for the functions stats::arima and forecast::auto.arima. The algorithm that maximizes the likelihood function does not always converge to a solution. You can find some details related to these issues, for example, in this post and this post. [1] Chung Chen and Lon-Mu Liu (1993) "Joint Estimation of Model Parameters and Outlier Effects in Time Series", Journal of the American Statistical Association, 88(421), pp. 284-297. DOI: 10.1080/01621459.1993.10594321.
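To make the shape of the outer loop concrete, here is a toy sketch in Python (illustration only, not the Chen and Liu statistics): it fits a simple AR(1) by least squares as a stand-in for the ARIMA fit, flags large standardized residuals as outliers, replaces the flagged observations with their one-step predictions, and repeats until nothing new is flagged or a maximum number of iterations is hit. The 3.5 threshold, the AR(1) stand-in, and the adjustment rule are all simplifications, not what 'tsoutliers' actually computes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy AR(1) series with two additive outliers injected at t = 50 and t = 120.
n = 200
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + rng.normal()
y[50] += 10.0
y[120] -= 10.0

def fit_ar1(z):
    """OLS estimate of an AR(1) coefficient (stand-in for a real ARIMA fit)."""
    return (z[:-1] @ z[1:]) / (z[:-1] @ z[:-1])

adjusted = y.copy()
outliers = set()
for iteration in range(4):           # cf. the default of 4 iterations
    phi = fit_ar1(adjusted)
    resid = adjusted[1:] - phi * adjusted[:-1]
    z = (resid - resid.mean()) / resid.std()
    # resid[i] belongs to time i + 1; flag new large standardized residuals.
    new = {i + 1 for i in np.flatnonzero(np.abs(z) > 3.5)} - outliers
    if not new:                      # stop when no further outliers are found
        break
    for i in sorted(new):            # adjust the series at the flagged times
        adjusted[i] = phi * adjusted[i - 1]
    outliers |= new

print(sorted(outliers))
```

The real procedure additionally estimates the outlier effects jointly with the model parameters and distinguishes outlier types (additive outliers, level shifts, etc.), but the detect / adjust / refit cycle above is the overall structure being described.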
Why do we divide by $n-1$ when calculating sample correlation?
We do not need the Bessel correction "-1" to $n$ when we compute correlation, so I think the cited piece is wrong. Let me start by noticing that most of the time we compute and use the empirical $r$, or the $r$ of the sample, both for describing the sample (the statistic) and for describing the population (the parameter estimate). This is different from variance and covariance coefficients, where, typically, we introduce the Bessel correction to distinguish between the statistic and the estimate.

So, consider empirical $r$. It is the cosine similarity of the centered variables ($X$ and $Y$ both centered): $r= \frac{\sum{X_cY_c}}{\sqrt{\sum X_c^2\sum Y_c^2}}$. Notice that this formula contains neither $n$ nor $n-1$ at all; we do not need to know the sample size to obtain $r$. On the other hand, that same $r$ is also the covariance of the z-standardized variables ($X$ and $Y$ both centered and then divided by their respective standard deviations $\sigma_x$ and $\sigma_y$): $r= \frac{\sum{X_zY_z}}{n-1}$. I suppose that in your question you are speaking of this formula. The Bessel correction in the denominator, which is introduced in the formula of covariance to unbias the estimate, in this specific formula for $r$ paradoxically serves to "undo" the unbiasing correction. Indeed, recall that $\sigma_x^2$ and $\sigma_y^2$ had been computed using denominator $n-1$, the Bessel correction. If in the latter formula for $r$ you unwind $X_z$ and $Y_z$, showing how they were computed out of $X_c$ and $Y_c$ using those "n-1"-based standard deviations, you'll find that all "n-1" terms cancel each other out of the formula, and you end up with the above cosine formula! The "n-1" in the "covariance formula" for $r$ was needed simply to cancel the "n-1" used earlier. If we prefer to compute $\sigma_x^2$ and $\sigma_y^2$ with denominator $n$ (instead of $n-1$), the formula for the very same correlation value becomes $r= \frac{\sum{X_zY_z}}{n}$. Here $n$ serves to cancel the earlier "n", analogously.

So, we needed $n-1$ in the denominator to cancel out the same denominator in the formulas of the variances; or $n$, for the same reason, in case the variances were computed as biased estimates. Empirical $r$ itself is not based on information about the sample size. As for the quest for a better population estimate of $\rho$ than the empirical $r$: there we do need corrections, but various approaches and a lot of different alternative formulas exist, and they use different corrections, usually not the $n-1$ one.
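The cancellation is easy to verify numerically. A small sketch (in Python with numpy, purely for illustration): the cosine formula, the z-score formula with denominator $n-1$ and Bessel-corrected standard deviations, and the z-score formula with denominator $n$ and biased standard deviations all return the identical $r$.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 0.5 * x + rng.normal(size=50)
n = len(x)

xc, yc = x - x.mean(), y - y.mean()

# Cosine formula: no n or n-1 appears anywhere.
r_cos = (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# z-scores with Bessel-corrected sd (ddof=1) and denominator n-1 ...
zx1, zy1 = xc / x.std(ddof=1), yc / y.std(ddof=1)
r_n1 = (zx1 @ zy1) / (n - 1)

# ... or z-scores with biased sd (ddof=0) and denominator n:
zx0, zy0 = xc / x.std(ddof=0), yc / y.std(ddof=0)
r_n = (zx0 @ zy0) / n

# All three agree: the n-1 (or n) in the denominator only cancels
# the same factor hidden inside the standard deviations.
print(r_cos, r_n1, r_n)
```

Either convention for the standard deviations gives the same $r$, which is the point: the sample correlation coefficient does not itself depend on a choice of $n$ versus $n-1$.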
Fisher's exact test gives non-uniform p-values
The problem is that the data are discrete, so histograms can be deceiving. I coded a simulation with qq-plots that show an approximately uniform distribution.

library(lattice)
set.seed(5545)
TotalNo=300
TotalYes=450
pvalueChi=rep(NA,10000)
pvalueFish=rep(NA,10000)
for(i in 1:10000){
  MaleAndNo=rbinom(1,TotalNo,.3)
  FemaleAndNo=TotalNo-MaleAndNo
  MaleAndYes=rbinom(1,TotalYes,.3)
  FemaleAndYes=TotalYes-MaleAndYes
  x=matrix(c(MaleAndNo,FemaleAndNo,MaleAndYes,FemaleAndYes),nrow=2,ncol=2)
  pvalueChi[i]=chisq.test(x)$p.value
  pvalueFish[i]=fisher.test(x)$p.value
}
dat=data.frame(pvalue=c(pvalueChi,pvalueFish),type=rep(c('Chi-Squared','Fishers'),each=10000))
histogram(~pvalue|type,data=dat,breaks=10)
qqmath(~pvalue|type,data=dat,distribution=qunif,
       panel = function(x, ...) {
         panel.qqmathline(x, ...)
         panel.qqmath(x, ...)
       })
What is the difference between A/B Testing and Randomized Control Trials?
A/B testing seems to be computer geeks' terminology, but the idea is of course the same. You have a control version of a web page and a changed one, and you test whether the difference in some user-action rate between the versions of the page is statistically significant. A/B testing tests the difference due to a single feature, while multivariate testing tests different combinations of features and their interactions.
Why is it so uncommon to report confidence intervals for medians?
Your question touches on both the question of why confidence intervals are not used in these fields, and on the question of why the mean is used in preference to the median even when one would think the median is more appropriate. In psychology (and possibly sociology and urban planning too, but I'm a psychologist, so I have no real idea), no, there are no particularly good theoretical (that is, statistical) reasons for these things. Instead, it's a matter of the field having long ago fallen into a cargo-cult approach to data analysis in which p-values are the coin of the realm, means and standard deviations are thought to be accurate representations of entire vectors, and researchers imagine that significance tests tell them whether the sample effect is equal to the population effect. See these papers for some discussion and speculation about how we ended up here and why psychologists have resisted change. Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49(12), 997–1003. doi:10.1037/0003-066X.49.12.997 Cumming, G. (2014). The new statistics: Why and how. Psychological Science, 25, 7–29. doi:10.1177/0956797613504966
Why is it so uncommon to report confidence intervals for medians?
I think it is because confidence intervals are more difficult to estimate for quantiles, such as the median, than for the mean. Here's an intro into the subject.
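That said, a distribution-free interval for the median is not hard to obtain from order statistics. Here is a Python sketch (the binomial argument is the standard one; the data are made up):

```python
import math

def median_ci(x, conf=0.95):
    """Distribution-free CI for the median from order statistics.

    The count of observations below the true median is Binomial(n, 1/2),
    so the interval (x_(k), x_(n+1-k)) covers the median with probability
    1 - 2 * P(Binomial(n, 1/2) <= k - 1).
    """
    x = sorted(x)
    n = len(x)
    alpha = 1 - conf
    # largest k with P(Binomial(n, 1/2) <= k - 1) <= alpha / 2
    k, cum = 0, 0.0
    while True:
        term = math.comb(n, k) * 0.5 ** n
        if cum + term > alpha / 2:
            break
        cum += term
        k += 1
    k = max(k, 1)  # fall back to the extremes for tiny samples
    return x[k - 1], x[n - k]

lo, hi = median_ci(range(1, 21))  # toy data: the integers 1..20
```

For n = 20 this gives the interval (x_(6), x_(15)), whose coverage is about 95.9% (exact coverage jumps discretely, which is part of why these intervals feel awkward to report).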
Approximate distribution of product of N normal i.i.d.? Special case μ≈0
It is possible to obtain an exact solution in the zero-mean case (part B).

The Problem

Let $(X_1, \dots, X_n)$ denote $n$ iid $N(0,\sigma^2)$ variables, each with common pdf $f(x)$. We seek the pdf of $\prod_{i=1}^n X_i$, for $n = 2, 3, \dots$

Solution

The pdf of the product of two such Normals is simply:

...

where I am using the TransformProduct function from the mathStatica package for Mathematica. The domain of support is:

The product of 3, 4, 5 and 6 Normals is obtained by iteratively applying the same function (here four times):

...

where MeijerG denotes the Meijer G function. By induction, the pdf of the product of $n$ iid $N(0,\sigma^2)$ random variables is:

$$\frac{1}{(2 \pi )^{\frac{n}{2}} \sigma ^n} \text{MeijerG}[\{ \{ \}, \{ \} \}, \{ \{0_1, \dots, 0_n \}, \{ \} \}, \frac{x^2}{2^n \sigma ^{2 n}}] \quad \quad \text{ for } x \in \mathbb{R} $$

Quick Monte Carlo check

Here is a quick check comparing:

- the theoretical pdf just obtained (when $n = 6$ and $\sigma=3$): RED DASHED curve
- the empirical Monte Carlo pdf: squiggly BLUE curve

Looks fine! [The blue squiggly Monte Carlo curve is obscuring the exact red-dashed curve.]
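A cheap sanity check that needs no Meijer G function: because the $X_i$ are independent with mean 0, the product $P=\prod_{i=1}^n X_i$ has mean 0 and variance $E[P^2]=\prod_i E[X_i^2]=\sigma^{2n}$. A Monte Carlo sketch (in Python rather than Mathematica), using the same $n=6$, $\sigma=3$ as the plot:

```python
import random

random.seed(1)
n, sigma, reps = 6, 3.0, 200_000

def product_of_normals():
    """One draw of the product of n iid N(0, sigma^2) variables."""
    p = 1.0
    for _ in range(n):
        p *= random.gauss(0.0, sigma)
    return p

samples = [product_of_normals() for _ in range(reps)]
mean = sum(samples) / reps
var = sum(s * s for s in samples) / reps   # E[P^2], since E[P] = 0
# var should sit near sigma**(2*n) = 3**12 = 531441, though the estimator
# is noisy because P^2 is extremely heavy-tailed
```

The Monte Carlo mean hovers near 0 and the second moment near $\sigma^{2n}$, consistent with the exact density above.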
What is out of time validation in logistic regression model?
Out-of-time validation is just out-of-sample validation on a later data-set than that on which you fitted your model; where application of a model to a population changing over time is the concern, rather than application to populations of different cities, species, materials, or whatever. So to do it you'd need samples from different times (& note that if you had those at the time of fitting the model it'd usually be more useful to use the whole data-set & include time in the model). It's anyone's guess whether it would have alerted you to a problem in this particular case. The calibration of a model often degrades much faster than its discrimination (I'd bet that changing the probability cut-off used to predict a churner would have resulted in greater accuracy—are you monitoring discrimination & calibration?), so re-calibration once in a while can be helpful. See Steyerberg et al. (2004), "Validation and updating of predictive logistic regression: a study on sample size and shrinkage", Statistics in Medicine, 23, p.2567.
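As a sketch of what monitoring discrimination and calibration on an out-of-time slice can look like (plain Python; the predicted probabilities and churn outcomes below are made up), here is a deliberately extreme case where discrimination is perfect while calibration has drifted:

```python
def calibration_in_the_large(p, y):
    """Mean predicted probability minus observed event rate; drift in
    this gap is the first thing re-calibration would fix."""
    return sum(p) / len(p) - sum(y) / len(y)

def auc(p, y):
    """Discrimination: probability a random event outranks a random
    non-event (the Mann-Whitney form of the c-statistic)."""
    pos = [pi for pi, yi in zip(p, y) if yi == 1]
    neg = [pi for pi, yi in zip(p, y) if yi == 0]
    wins = sum((pi > pj) + 0.5 * (pi == pj) for pi in pos for pj in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical out-of-time slice: the model was trained when churn ran
# higher, so it now over-predicts, yet it still ranks churners first.
p = [0.9, 0.8, 0.7, 0.4, 0.35, 0.3, 0.25, 0.2, 0.1, 0.1]
y = [1,   1,   0,   0,   0,    0,   0,    0,   0,   0]
```

Here `auc(p, y)` is 1.0 while the model over-predicts by 21 percentage points, which is exactly the pattern where re-calibration (rather than refitting) pays off.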
When to correct p-values in multiple comparisons?
Answer to question 1

You need to adjust for multiple comparisons if you care about the probability at which you will make a Type I error. A simple combination of metaphor and thought experiment may help:

Imagine that you want to win the lottery. This lottery, strangely enough, gives you a 0.05 chance of winning (i.e. 1 in 20). M is the cost of a ticket in this lottery, meaning that your expected return for a single lottery call is M/20. Now, even stranger, imagine that for unknown reasons this cost, M, allows you to have as many lottery tickets as you want (or at least more than two). Thinking to yourself "the more you play, the more you win," you grab a bunch of tickets. Your expected return on a lottery call is no longer M/20, but something a fair bit larger.

Now replace "winning the lottery" with "making a Type I error." If you do not care about errors, and you don't care about people repeatedly and mockingly directing your attention to a certain cartoon about jellybeans, then go ahead and do not adjust for multiple comparisons.

The "same data" issue arises in family-wise error correction methods (e.g. Bonferroni, Holm-Sidák, etc.), since the concept of "family" is somewhat vague. However, the false discovery rate methods (e.g. Benjamini and Hochberg, Benjamini and Yekutieli, etc.) have the property that their results are robust across different groups of inferences.

Answer to question 2

Most pairwise tests require correction, although there are stylistic and disciplinary differences in what gets called a test. For example, some folks refer to "Bonferroni t tests" (which is a neat trick, since Bonferroni developed neither the t test nor the Bonferroni adjustment for multiple comparisons :).

I personally find this dissatisfying, as (1) I would like to make a distinction between conducting a group of statistical tests and adjusting for multiple comparisons in order to effectively understand the inferences I am making, and (2) when someone comes along with a new pairwise test founded on a solid definition of $\alpha$, then I know I can perform adjustments for multiple comparisons.
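For illustration, here is a sketch of two of the family-wise corrections mentioned in the answer to question 1, Bonferroni and Holm, applied to a vector of hypothetical p-values (plain Python):

```python
def bonferroni(pvals):
    """Multiply each p-value by the number of tests (capped at 1)."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def holm(pvals):
    """Holm's step-down method: sort, multiply p_(i) by (m - i), then
    enforce monotonicity; uniformly more powerful than Bonferroni."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, min(1.0, (m - rank) * pvals[i]))
        adj[i] = running_max
    return adj

p = [0.01, 0.04, 0.03, 0.005]   # made-up raw p-values from four tests
b = bonferroni(p)
h = holm(p)
```

At $\alpha = 0.05$, Bonferroni retains only two of the four discoveries while Holm retains all four, which is the usual pattern: Holm never rejects less.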
Multiple comparisons in a non-parametric test
You are looking for Dunn's test (or, say, the Conover-Iman test). This is very much like a set of pairwise rank sum tests, but Dunn's version (1) accounts for the pooled variance implied by the null hypothesis, and (2) retains the ranking used to conduct the Kruskal-Wallis test. Performing garden-variety Wilcoxon/Mann-Whitney rank sum tests ignores these issues. One can, of course, perform family-wise error rate or false discovery rate corrections for multiple comparisons with Dunn's test.

Dunn's test is implemented for Stata in the package dunntest (from Stata type net describe dunntest, from(https://alexisdinno.com/stata) whilst connected to the Internet), and for R in the package dunn.test; both packages include many multiple-comparison adjustment options. One might also perform Dunn's test in SAS using Elliott and Hynan's macro, KW_MC.

As I wrote in a related CV question: there are a few less well-known post hoc pairwise tests to follow a rejected Kruskal-Wallis, including the Conover-Iman test (like Dunn's, but based on the t distribution rather than the z distribution, strictly more powerful than Dunn's test, and also implemented for Stata in the package conovertest and for R in the conover.test package), and the Dwass-Steel-Critchlow-Fligner tests.

References
Dunn, O. J. (1964). Multiple comparisons using rank sums. Technometrics, 6(3):241–252.
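To make the contrast with garden-variety pairwise rank sum tests concrete, here is a bare-bones sketch of Dunn's z statistic for one pair of groups (Python rather than the packages above; no ties correction, and the group data are made up):

```python
import math

def dunn_z(groups, i, j):
    """Dunn's z for groups i and j: difference in mean ranks over the
    standard error implied by the Kruskal-Wallis null, with ranks taken
    over ALL groups at once (this sketch assumes no tied values)."""
    pooled = sorted(x for g in groups for x in g)
    rank = {x: r + 1 for r, x in enumerate(pooled)}
    n = len(pooled)
    mean_rank = [sum(rank[x] for x in g) / len(g) for g in groups]
    se = math.sqrt(n * (n + 1) / 12 * (1 / len(groups[i]) + 1 / len(groups[j])))
    return (mean_rank[i] - mean_rank[j]) / se

# made-up samples from three groups
groups = [[1.2, 3.4, 2.2, 5.1], [4.8, 6.0, 7.2, 8.5], [0.5, 1.9, 2.8, 3.1]]
z01 = dunn_z(groups, 0, 1)
```

The two features the answer stresses are visible in the code: the ranks come from the pooled sample across all three groups, and the standard error uses the pooled-variance term $n(n+1)/12$ from the Kruskal-Wallis null, not just the two groups being compared.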
Should I bootstrap at the cluster level or the individual level?
Imagine that you conducted a study about children's educational achievements. You took a random sample of schools from some area, and from each school one class was included in the study. You conducted your analysis and now want to use the bootstrap to obtain confidence intervals for your estimates. How should you do it?

First, notice that your data are hierarchical: schools, classes within schools, and students within classes. Since there is only one class per school, the second level is effectively absent from your data. We can assume that there are some similarities within schools and differences between schools. If there are similarities within schools, then sampling pupils at random without taking their school membership into consideration could destroy the hierarchical structure of your data.

In general, there are several options:

1. sample students with replacement,
2. sample whole schools with replacement,
3. first sample schools with replacement and then sample students (a) with replacement, or (b) without replacement.

The first approach is the worst one. Recall that bootstrap sampling should somehow imitate the sampling process in your study, and you were sampling schools rather than individual students. Choosing between (2) and (3) is more complicated, but hopefully you can find research papers considering this topic (e.g. Ren et al. 2010, Field and Welsh 2007). Generally, options (2) or (3b) are preferable, as it seems that including too many levels of sampling with replacement leads to biased results. You can find more information about this topic also in the books by Efron and Tibshirani (1994) and Davison and Hinkley (1997).

Notice that we have a similar problem with bootstrapping time-series data: there we also sample whole blocks of the series (e.g. a whole season if we assume seasonality) rather than individual observations, because otherwise the time structure would be destroyed.

In practice there is no one-size-fits-all solution, but with complicated data structures you should choose the bootstrap sampling scheme that best fits your data and your problem and, if possible, use a simulation study to compare different solutions.

Davison, A.C. and Hinkley, D.V. (1997). Bootstrap Methods and Their Application. Cambridge.
Efron, B. and Tibshirani, R.J. (1994). An Introduction to the Bootstrap. CRC Press.
Ren, S., Lai, H., Tong, W., Aminzadeh, M., Hou, X., & Lai, S. (2010). Nonparametric bootstrapping for hierarchical data. Journal of Applied Statistics, 37(9), 1487-1498.
Field, C. A., & Welsh, A. H. (2007). Bootstrapping clustered data. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(3), 369-390.
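A minimal sketch of option (2), resampling whole schools with replacement (plain Python; the data layout and the statistic, a simple mean, are made up):

```python
import random

# Hypothetical data: {school_id: [pupil scores]}
schools = {
    "A": [52, 61, 58], "B": [70, 75, 72],
    "C": [63, 60, 66], "D": [55, 59, 57],
}

def cluster_bootstrap_means(schools, reps=2000, seed=7):
    """Resample schools (the units actually sampled) with replacement,
    keeping each resampled school's pupils intact, and recompute the mean."""
    rng = random.Random(seed)
    ids = list(schools)
    means = []
    for _ in range(reps):
        sample = [x for sid in rng.choices(ids, k=len(ids)) for x in schools[sid]]
        means.append(sum(sample) / len(sample))
    return means

means = sorted(cluster_bootstrap_means(schools))
ci = (means[int(0.025 * len(means))], means[int(0.975 * len(means))])
```

The resulting percentile interval reflects the between-school variability; resampling individual pupils instead would typically produce an interval that is too narrow.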
Should I bootstrap at the cluster level or the individual level?
The answer seems to be that the resampling process needs to take account of the structure of the data. There is a nice explanation here (along with some R code to implement this). http://biostat.mc.vanderbilt.edu/wiki/Main/HowToBootstrapCorrelatedData Thanks to the pointer from the UCLA Statistical Consulting Group. I have written a speedier (but less flexible) version of the code snippet linked to above - check here for updates and details.

rsample2 <- function(data, id.cluster) {
  require(data.table)
  setkeyv(data, id.cluster)
  # Generate within-cluster ID (needed for the sample command)
  data[, id.within := seq_len(.N), by = id.cluster]
  # Random sample of clusters
  bdt <- data.table(sample(unique(data[[id.cluster]]), replace = TRUE))
  setnames(bdt, "V1", id.cluster)
  setkeyv(bdt, id.cluster)
  # Use the random sample of clusters to select rows from the original data,
  # then within each cluster sample rows with replacement using the
  # within-cluster ID
  bdt <- data[bdt, .SD[sample(.SD$id.within, replace = TRUE)], by = .EACHI]
  # return data sampled with replacement, respecting clusters
  bdt[, id.within := NULL]  # drop the helper column
  return(bdt)
}
How do we predict rare events?
The standard approach is "extreme value theory", there is an excellent book on the subject by Stuart Coles (although the current price seems rather, err ... extreme). The reason you are unlikely to get good results using classification or regression methods is that these methods typically depend on predicting the conditional mean of the data, and extreme events are usually caused by the conjunction of "random" factors all aligning in the same direction, so they are in the tails of the distribution of plausible outcomes, which are usually a long way from the conditional mean. What you can do is to predict the whole conditional distribution, rather than just its mean, and get some information on the probability of an extreme event by integrating the tail of the distribution above some threshold. I found this worked well in an application on statistical downscaling of heavy precipitation.
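A sketch of the peaks-over-threshold version of this idea (plain Python with a quick method-of-moments Generalized Pareto fit, rather than the maximum-likelihood fits in dedicated EVT packages; the "rainfall" series is simulated):

```python
import math, random

random.seed(3)
x = [random.expovariate(1 / 10.0) for _ in range(5000)]  # stand-in daily rainfall

xs = sorted(x)
u = xs[int(0.95 * len(xs))]            # ~95th percentile threshold
exceed = [v - u for v in x if v > u]   # peaks over threshold

# Method-of-moments fit of a Generalized Pareto to the exceedances:
# mean = sigma/(1-xi), var = sigma^2 / ((1-xi)^2 (1-2 xi))
m = sum(exceed) / len(exceed)
v = sum((e - m) ** 2 for e in exceed) / (len(exceed) - 1)
xi = 0.5 * (1 - m * m / v)
sigma = m * (1 - xi)

def gpd_sf(z, xi, sigma):
    """P(exceedance > z) under the fitted GPD."""
    if abs(xi) < 1e-9:
        return math.exp(-z / sigma)
    return max(0.0, 1 + xi * z / sigma) ** (-1 / xi)

# Unconditional tail probability P(X > level), a level never seen in the sample
level = 80.0
p_tail = len(exceed) / len(x) * gpd_sf(level - u, xi, sigma)
```

This is the integration-of-the-tail step from the last paragraph: the fitted tail lets you estimate the probability of events more extreme than anything in the training data, which no conditional-mean regression will give you.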
Do the mean and the variance always exist for exponential family distributions?
Taking $s=1$, $h(x)=1$, $\eta_1(\theta)=\theta$, and $T_1(x)=\log(|x|+1)$ gives $A(\theta)=\log\left(-2/(1+\theta)\right)$ provided $\theta \lt -1$, producing $$f_X(x|\theta) = \exp\left(\theta\log(|x|+1) - \log\left(\frac{-2}{1+\theta}\right)\right) = -\frac{1+\theta}{2}(1+|x|)^\theta. $$ Graphs of $f_X(\cdot\,|\,\theta)$ are shown for $\theta=-3/2, -2, -3$ (in blue, red, and gold, respectively). Clearly the absolute moments of order $\alpha=-1-\theta$ or greater do not exist, because the integrand $|x|^\alpha f_X(x|\theta)$, which is asymptotically proportional to $|x|^{\alpha+\theta}$, will produce a convergent integral at the limits $\pm\infty$ if and only if $\alpha+\theta\lt -1$. In particular, when $-2 \le \theta \lt -1,$ this distribution does not even have a mean (and certainly not a variance).
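A quick numeric check of the normalization and of the divergent mean, using $\theta = -3/2$ and crude midpoint integration (Python; the truncated integrals have simple closed forms, noted in the comments):

```python
import math

theta = -1.5
f = lambda x: -(1 + theta) / 2 * (1 + abs(x)) ** theta

def integrate(g, lo, hi, n=200_000):
    """Midpoint rule; crude but adequate for a sanity check."""
    h = (hi - lo) / n
    return h * sum(g(lo + (i + 0.5) * h) for i in range(n))

# Normalization: the truncated mass is 1 - (1 + L)**(theta + 1),
# which tends to 1 as L -> infinity, so the density is proper.
total = integrate(f, -100, 100)

# First absolute moment over [-L, L]: closed form
# sqrt(1 + L) + 1/sqrt(1 + L) - 2, growing like sqrt(L) -- no mean.
m1 = [integrate(lambda x: abs(x) * f(x), -L, L) for L in (1e2, 1e4, 1e6)]
```

`total` matches $1 - 101^{-1/2} \approx 0.9005$, while the truncated first absolute moments keep growing without bound as the integration range widens, confirming that for $\theta = -3/2$ (in the range $-2 \le \theta < -1$) the mean does not exist.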
Test whether two multinomial samples come from the same distribution
You correctly performed a $\chi^2$-test of independence, so the only problem is in the formulation of its hypotheses and the interpretation of the test result: the $\chi^2$-test of independence tests the null hypothesis "the two color distributions are equal" versus the working hypothesis of any difference. The p value is smaller than the prespecified level $\alpha$, so you reject the null hypothesis and claim with about $(1-\alpha)\cdot 100\%$ confidence that the colors are differently distributed between urns.

The term "independence" test is sometimes a bit confusing, but it becomes clearer if you consider the "raw" data behind the contingency table:

Color  Urn
Blue   1
Blue   2
Green  2
Red    1
Blue   1
...

The null hypothesis that the variable "Urn" is independent of the random variable "Color" is equivalent to the null hypothesis stated above. So it's not about independence of the two color distributions but about independence of color and urn.

Note that a large p value wouldn't mean that the color distributions were equal. This would be much harder to show by "classic" statistical methods.
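For reference, the mechanics of the test are simple; here is a Python sketch with made-up urn counts (for a 2×3 table df = 2, where the chi-square survival function is simply $e^{-\chi^2/2}$):

```python
import math

# Hypothetical counts of colors drawn from two urns
#            Blue  Green  Red
table = [[25,  15,  10],   # urn 1
         [15,  25,  10]]   # urn 2

rows = [sum(r) for r in table]
cols = [sum(c) for c in zip(*table)]
total = sum(rows)

# sum of (observed - expected)^2 / expected over all cells
chi2 = sum((table[i][j] - rows[i] * cols[j] / total) ** 2
           / (rows[i] * cols[j] / total)
           for i in range(len(rows)) for j in range(len(cols)))

df = (len(rows) - 1) * (len(cols) - 1)
p_value = math.exp(-chi2 / 2)   # chi-square survival function, valid for df = 2
```

With these counts $\chi^2 = 5$ and the p value is about 0.082: not significant at $\alpha = 0.05$, which, per the last paragraph, still does not demonstrate that the two color distributions are equal.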
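For a concrete sketch of the test (in Python with scipy; the counts are invented, since the question's data aren't shown — in R this is just chisq.test on the contingency table):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Invented color counts for the two urns
table = np.array([[20, 10,  5],   # urn 1: blue, green, red
                  [ 8, 15, 12]])  # urn 2
chi2, p, dof, expected = chi2_contingency(table)
print(dof, p)  # df = (2-1)*(3-1) = 2; a small p means color depends on urn
```

A small p-value here rejects "color is independent of urn", i.e. the two color distributions differ.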
24,191
Test whether two multinomial samples come from the same distribution
Suppose $Y \mid X = 0$ and $Y \mid X = 1$ are the two multinomial distributions, indexed by $X$. Independence means $P(Y = y \mid X = x) = P(Y = y)$ for all $x$ and $y$, which immediately gives $P(Y = y \mid X = 0) = P(Y = y \mid X = 1)$. Conversely, if the two conditional distributions are equal, then each of them equals the marginal $P(Y = y)$, so $Y$ and $X$ are independent. Hence the test of independence is equivalent to a test that the two samples come from the same distribution.
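A small numeric companion (Python sketch; probabilities and seed are arbitrary): when both samples really do come from one multinomial distribution, the independence test has nothing to detect and its p-value behaves as under its null.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Two samples drawn from the *same* multinomial distribution
rng = np.random.default_rng(0)
probs = [0.5, 0.3, 0.2]
table = np.vstack([rng.multinomial(200, probs),
                   rng.multinomial(200, probs)])
chi2, p, dof, _ = chi2_contingency(table)
print(p)  # under this null the p-value is (approximately) uniform on (0, 1)
```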
24,192
MLE for triangle distribution?
Is it possible to apply the usual MLE procedure to the triangle distribution? Certainly! Though there are some oddities to deal with, it's possible to compute MLEs in this case. However, if by 'the usual procedure' you mean 'take derivatives of the log-likelihood and set it equal to zero', then maybe not. What is the exact nature of the obstruction to MLE here (if indeed there is one)? Have you tried drawing the likelihood? -- Followup after clarification of question: The question about drawing the likelihood was not idle commentary, but central to the issue. MLE will involve taking a derivative No. MLE involves finding the argmax of a function. That only involves finding the zeros of a derivative under certain conditions... which don't hold here. At best, if you manage to do that you'll identify a few local minima. As my earlier question suggested, look at the likelihood. Here's a sample, $y$ of 10 observations from a triangular distribution on (0,1): 0.5067705 0.2345473 0.4121822 0.3780912 0.3085981 0.3867052 0.4177924 0.5009028 0.8420312 0.2588613 Here's the likelihood and log-likelihood functions for $c$ on that data: The grey lines mark the data values (I should probably have generated a new sample to get better separation of the values). The black dots mark the likelihood / log-likelihood of those values. Here's a zoom in near the maximum of the likelihood, to see more detail: As you can see from the likelihood, at many of the order statistics, the likelihood function has sharp 'corners' - points where the derivative doesn't exist (which is no surprise - the original pdf has a corner and we're taking a product of pdfs). This (that there are cusps at order statistics) is the case with the triangular distribution, and the maximum always occurs at one of the order statistics. 
(That cusps occur at order statistics isn't unique to the triangular distributions; for example the Laplace density has a corner and as a result the likelihood for its center has one at each order statistic.) As it happens in my sample, the maximum occurs at the fourth order statistic, 0.3780912. So to find the MLE of $c$ on (0,1), just find the likelihood at each observation. The one with the biggest likelihood is the MLE of $c$. A useful reference is chapter 1 of "Beyond Beta" by Johan van Dorp and Samuel Kotz. As it happens, Chapter 1 is a free 'sample' chapter for the book - you can download it here. There's a lovely little paper by Eddie Oliver on this issue with the triangular distribution, in The American Statistician (which makes basically the same points; I think it was in a Teacher's Corner). Edit: here it is: E. H. Oliver (1972), A Maximum Likelihood Oddity, The American Statistician, Vol 26, Issue 3, June, p43-44 (publisher link). If you can easily get hold of it, it's worth a look, but the van Dorp and Kotz chapter covers most of the relevant issues, so it's not crucial. By way of followup on the question in comments - even if you could find some way of 'smoothing off' the corners, you'd still have to deal with the fact that you can get multiple local maxima. It might, however, be possible to find estimators that have very good properties (better than method of moments), which you can write down easily. But ML on the triangular on (0,1) is a few lines of code. If it's a matter of huge amounts of data, that, too, can be dealt with, but that would be another question, I think. For example, not every data point can be a maximum, which reduces the work, and there are some other savings that can be made.
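The "few lines of code" mentioned in the answer can be sketched as follows (in Python rather than R; the sample is the one given in the answer):

```python
import numpy as np

def triangle_loglik(c, x):
    """Log-likelihood of mode c for the triangular density on (0, 1):
    f(x) = 2x/c for x <= c and 2(1 - x)/(1 - c) for x > c."""
    f = np.where(x <= c, 2 * x / c, 2 * (1 - x) / (1 - c))
    return np.sum(np.log(f))

# The sample from the answer
y = np.array([0.5067705, 0.2345473, 0.4121822, 0.3780912, 0.3085981,
              0.3867052, 0.4177924, 0.5009028, 0.8420312, 0.2588613])

# The maximum always occurs at an order statistic, so evaluate the
# log-likelihood only at the observed values and take the argmax.
c_hat = y[np.argmax([triangle_loglik(c, y) for c in y])]
print(c_hat)  # 0.3780912, the fourth order statistic, as stated in the answer
```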
24,193
Dynamic factor analysis vs state space model
I did not see your question before. Yes, dynamic factor analysis can be seen as a particular case of a state-space model. It makes the observations depend on a low-dimensional state vector (small relative to the dimension of the observation vector). So it is the same idea as in ordinary factor analysis, plus time dependence. The "factors" may have any time dynamics. Several R packages, if you use R, will let you specify a general dynamic factor analysis model, for instance dlm or KFAS.
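To make "ordinary factor analysis, plus time dependence" concrete, here is a minimal simulation of a one-factor model (a Python sketch; all parameter values are made up): a single latent factor follows an AR(1), and each observed series loads on it.

```python
import numpy as np

# One latent factor with AR(1) dynamics driving four observed series:
# y_t = lambda * f_t + noise
rng = np.random.default_rng(0)
T, p = 500, 4
phi = 0.8                               # factor AR(1) coefficient (made up)
lam = np.array([1.0, 0.8, -0.5, 0.3])   # factor loadings (made up)

f = np.zeros(T)
for t in range(1, T):
    f[t] = phi * f[t - 1] + rng.normal()
y = f[:, None] * lam + rng.normal(scale=0.5, size=(T, p))

# Series with same-sign loadings co-move positively, opposite signs negatively.
corr = np.corrcoef(y, rowvar=False)
print(np.round(corr, 2))
```

This is exactly the observation/state pair a state-space fitter such as dlm or KFAS would estimate.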
24,194
Mann-Whitney null hypothesis under unequal variance
The Mann-Whitney test is a special case of a permutation test (the distribution under the null is derived by looking at all the possible permutations of the data), and permutation tests have identical distributions as their null, so that is technically correct. One way of thinking of the Mann-Whitney test statistic is as a measure of the number of times a randomly chosen value from one group exceeds a randomly chosen value from the other group. So the P(X>Y)=0.5 statement also makes sense, and this is technically a property of the equal-distributions null (assuming continuous distributions where the probability of a tie is 0): if the two distributions are the same, then the probability of X being greater than Y is 0.5, since both are drawn from the same distribution.

The stated case of two distributions having the same mean but widely different variances matches the 2nd null hypothesis, but not the 1st of identical distributions. We can do some simulation to see what happens with the p-values in this case (in theory they should be uniformly distributed):

> out <- replicate(100000, wilcox.test(rnorm(25, 0, 2), rnorm(25, 0, 10))$p.value)
> hist(out)
> mean(out < 0.05)
[1] 0.07991
> prop.test(sum(out < 0.05), length(out), p = 0.05)

        1-sample proportions test with continuity correction

data:  sum(out < 0.05) out of length(out), null probability 0.05
X-squared = 1882.756, df = 1, p-value < 2.2e-16
alternative hypothesis: true p is not equal to 0.05
95 percent confidence interval:
 0.07824054 0.08161183
sample estimates:
      p
0.07991

So clearly this is rejecting more often than it should, and the null hypothesis is false (this matches equality of distributions, but not prob = 0.5). Thinking in terms of the probability of X > Y also runs into some interesting problems if you ever compare populations that are based on Efron's Dice.
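The same simulation can be sketched in Python with scipy (fewer replications and an arbitrary seed, so the rejection rate will differ slightly from the R run in the answer):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Equal means, very different SDs: the "identical distributions" null is
# false, so the p-values should not be uniform.
rng = np.random.default_rng(1)
reps = 2000
pvals = np.empty(reps)
for i in range(reps):
    x = rng.normal(0, 2, 25)
    y = rng.normal(0, 10, 25)
    pvals[i] = mannwhitneyu(x, y, alternative="two-sided").pvalue

rate = np.mean(pvals < 0.05)
print(rate)  # noticeably above the nominal 0.05 (the R run got ~0.08)
```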
24,195
Mann-Whitney null hypothesis under unequal variance
Mann-Whitney isn't sensitive to changes in variance with equal means, but it can - as you see with the $P(X>Y)=0.5$ form - detect differences that lead $P(X>Y)$ to deviate from $0.5$ (e.g. where both mean and variance increase together). Quite clearly, if you had two normals with equal means, their differences are symmetric about zero. Therefore $P(X>Y) = P(X-Y>0) = \frac{1}{2}$, which is the null situation. For example, if the distribution of $Y$ is exponential with mean $1$ while $X$ has an exponential distribution with mean $k$ (a scale change), the Mann-Whitney is sensitive to that (indeed, taking logs of both variables, it's just a location shift, and the Mann-Whitney is unaffected by monotonic transformation). -- If you're interested in tests which are conceptually very similar to the Mann-Whitney but sensitive to differences in spread under equality of medians, there are several such tests. There's the Siegel-Tukey test and the Ansari-Bradley test, for example, both closely related to the Mann-Whitney-Wilcoxon two-sample test. They are both based on the basic idea of ranking in from the ends. If you use R, the Ansari-Bradley test is built in ... ?ansari.test The Siegel-Tukey test in effect just does a Mann-Whitney-Wilcoxon test on ranks computed from the sample differently; if you rank the data yourself, you don't really need a separate function for the p-values. Nevertheless, you can find some, as here: http://www.r-statistics.com/2010/02/siegel-tukey-a-non-parametric-test-for-equality-in-variability-r-code/ -- (In relation to ttnphns' comment under my original answer:) You would be over-interpreting my response to read it as disagreeing with @GregSnow in any particularly substantive sense. There's certainly a difference in emphasis and to some extent in what we're talking about, but I'd be very surprised if there was much real disagreement behind it.
Let's quote Mann and Whitney: "A statistic $U$ depending on the relative ranks of the $x$'s and $y$'s is proposed for testing the hypothesis $f=g$." That's unequivocal; it utterly supports @GregSnow's position. Now, let's see how the statistic is constructed: "Let $U$ count the number of times a $y$ precedes an $x$." Now if their null is true, the probability of that event is $\frac{1}{2}$ ... but there are other ways to get a probability of 0.5, and in that sense one might construe that the test can work in other circumstances. To the extent that they're estimating a (re-scaled) probability that $Y > X$, it supports what I said. However, for the significance levels to be guaranteed to be exactly correct, you'd need the distribution of $U$ to match the null distribution. That's derived on the assumption that, under the null, all permutations of the $X$ and $Y$ group labels over the combined observations are equally likely. This is certainly the case under $f=g$. Exactly as @GregSnow said. The question is the extent to which this is the case (i.e. that the distribution of the test statistic matches the one derived under the assumption that $f=g$, or approximately so) for the more generally expressed null. I believe that in many situations it does; in particular, for situations including but more general than the one you describe (two normal populations with the same mean but extremely unequal variance can be generalized quite a bit without altering the resulting distribution based on ranks), I believe that the distribution of the test statistic turns out to match the one under which it was derived, and so the test should be valid there. I did some simulations that seem to support this. However, it won't always be a very useful test (it may have poor power). I offer no proof that this is the case.
I've applied some intuition/hand-wavy argument and also done some basic simulations that suggest it's true -- that the Mann-Whitney works (in that it has the 'right' distribution under the null) much more broadly than when $f=g$. Make of it what you will, but I don't construe this as substantive disagreement with @GregSnow. Reference: Mann & Whitney's original paper.
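The Ansari-Bradley test mentioned in the answer is also available outside R; a minimal sketch in Python with scipy (made-up data, arbitrary seed):

```python
import numpy as np
from scipy.stats import ansari, mannwhitneyu

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 30)   # equal medians (both zero) ...
y = rng.normal(0, 5, 30)   # ... but very different spreads

stat, p_ansari = ansari(x, y)
print(p_ansari)                   # small: the spread difference is detected
print(mannwhitneyu(x, y).pvalue)  # for contrast, MWW says little about spread
```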
24,196
"the leading minor of order 1 is not positive definite" error using 2l.norm in mice
I have had a similar problem in MICE, see my self-discussion here. The problem occurs because you have overfitted your model (too many parameters/variables), some variables are highly collinear, or you have cases that have missings on all variables. In my case the model was overfitted. One way to solve this issue is by adjusting the predictor matrix of MICE. You can look at the predictor matrix with imp$pred, where imp is your mids object. You can use

new.pred <- quickpred(data)
mice(..., pred = new.pred)

to automatically generate a predictor matrix based on the bivariate correlations of the variables in the data (e.g. Pearson, Spearman), where .10 is the default cutoff. This may solve your problem. More generally, build your models wisely and do not just include all the variables you may have.
"the leading minor of order 1 is not positive definite" error using 2l.norm in mice
I have had a similar problem in MICE, see my self-discussion here. The problem occurs because you have overfitted your model (too many parameters, variables), some variables are highly colinear or you
"the leading minor of order 1 is not positive definite" error using 2l.norm in mice I have had a similar problem in MICE, see my self-discussion here. The problem occurs because you have overfitted your model (too many parameters, variables), some variables are highly colinear or you have cases that have missings on all variables. In my case the model was overfitted. One way to solve this issue is by adjusting the predictor matrix of MICE. You may give imp$pred where impis your mids object, to look at the predictor matrix. You can use new.pred <- quickpred(data) mice(..., pred=new.pred) to automatically generate a predictor matrix based on the bivariate correlations of the variables in the data (eg Pearson, Spearman), where .10 is the default cutoff. This may solve your problem. More generally build your models wisely and do not just include all variables you may have.
"the leading minor of order 1 is not positive definite" error using 2l.norm in mice I have had a similar problem in MICE, see my self-discussion here. The problem occurs because you have overfitted your model (too many parameters, variables), some variables are highly colinear or you
24,197
Dynamic Time Warping and normalization
No "general approach" exists for this at least to my knowledge. Besides you are trying to minimize a distance metric anyway. For example in the granddaddy of DTW papers Sakoe & Chiba (1978) use $|| a_i - b_i||$ as the measurement of difference between two feature vectors. As you correctly identified you need to have the same number of points (usually) for this to work out of the box. I would propose using a lowess() smoother/interpolator over your curves to make them of equal size first. It's pretty standard stuff for "curve statistics". You can see an example application in Chiou et al. (2003); the authors don't care about DTW as such in this work but it is a good exemplar how to deal with unequal sized readings. Additionally as you say "amplitude" is an issue. This is a bit more open ended to be honest. You can try an Area-Under-the-Curve approach like the one proposed by Zhang and Mueller (2011) to take care of this but really for the purposes of time warping even sup-norm normalization (ie. replace $f(x)$ with $\frac{f(x)}{sup_y|f(x)|}$ could do as in this paper by Tang and Mueller (2009). I would follow the second, but in any case as you also noticed normalization of samples is a necessity. Depending on the nature of your data you can find more application specific literature. I personally find the approach of minimizing with the respect to a target pairwise warping function $g$ the most intuitive of all. So the target function to minimize is: $C_\lambda(Y_i,Y_k, g) = E\{ \int_T (Y_i(g(t)) - Y_k(t))^2 + \lambda(g(t) -t)^2 dt| Y_i,Y_k\}$, where the whole thing despite it's uncanniness is actually quite straightforward: you try to find to find the warping function $g$ that minimizes the expected sum of the mismatch of the warped query curve $Y_i(g(t))$ to the reference curve $Y_k(t)$ (the term $ Y_i(g(t)) - Y_k(t) $) subject to some normalization to the time-distortion you impose by that warping (the term $g(t) -t$). 
This is what the MATLAB package PACE implements. I know that there exists an R package fda by J. O. Ramsay et al. that might be of help also, but I have not personally used it (a bit annoyingly, the standard reference for that package's methods is in many cases Ramsay and Silverman's excellent book, Functional Data Analysis (2006), 2nd ed., and you have to scour a 400-page book to get what you are looking for; at least it's a good read anyway). The problem you are describing is widely known in the statistics literature as "curve registration" (for example, see Gasser and Kneip (1995) for an early treatment of the issue) and falls under the general umbrella of Functional Data Analysis techniques. (In cases where I could find the original paper available online, the link directs there; otherwise the link directs to a general digital library. Almost all the papers mentioned can be found as free draft versions. I deleted my original comment as it is superseded by this post.)
Dynamic Time Warping and normalization
No "general approach" exists for this at least to my knowledge. Besides you are trying to minimize a distance metric anyway. For example in the granddaddy of DTW papers Sakoe & Chiba (1978) use $|| a_
Dynamic Time Warping and normalization No "general approach" exists for this, at least to my knowledge. Besides, you are trying to minimize a distance metric anyway. For example, in the granddaddy of DTW papers, Sakoe & Chiba (1978) use $||a_i - b_i||$ as the measure of difference between two feature vectors. As you correctly identified, you need to have the same number of points (usually) for this to work out of the box. I would propose using a lowess() smoother/interpolator over your curves to make them of equal size first. It's pretty standard stuff for "curve statistics". You can see an example application in Chiou et al. (2003); the authors don't care about DTW as such in this work, but it is a good exemplar of how to deal with unequal-sized readings. Additionally, as you say, "amplitude" is an issue. This is a bit more open-ended, to be honest. You can try an area-under-the-curve approach like the one proposed by Zhang and Mueller (2011) to take care of this, but really, for the purposes of time warping, even sup-norm normalization (i.e., replacing $f(x)$ with $\frac{f(x)}{\sup_x |f(x)|}$) could do, as in this paper by Tang and Mueller (2009). I would follow the second, but in any case, as you also noticed, normalization of samples is a necessity. Depending on the nature of your data you can find more application-specific literature. I personally find the approach of minimizing with respect to a target pairwise warping function $g$ the most intuitive of all. The target function to minimize is $C_\lambda(Y_i, Y_k, g) = E\{ \int_T (Y_i(g(t)) - Y_k(t))^2 + \lambda(g(t) - t)^2 \, dt \mid Y_i, Y_k\}$, where the whole thing, despite its uncanny look, is actually quite straightforward: you try to find the warping function $g$ that minimizes the expected integrated squared mismatch of the warped query curve $Y_i(g(t))$ to the reference curve $Y_k(t)$ (the term $Y_i(g(t)) - Y_k(t)$), subject to a penalty for the time distortion imposed by that warping (the term $g(t) - t$).
This is what the MATLAB package PACE implements. I know there exists an R package fda by J. O. Ramsay et al. that might also be of help, but I have not personally used it. (A bit annoyingly, the standard reference for that package's methods is in many cases Ramsay and Silverman's excellent book Functional Data Analysis (2006, 2nd ed.), and you have to scour a 400-page book to get what you are looking for; at least it's a good read anyway.) The problem you are describing is widely known in the statistics literature as "curve registration" (for example, see Gasser and Kneip (1995) for an early treatment of the issue) and falls under the general umbrella of functional data analysis techniques. (In cases where I could find the original paper available online, the link directs there; otherwise the link directs to a general digital library. Almost all the papers mentioned can be found as free draft versions. I deleted my original comment as it is superseded by this post.)
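As a rough illustration of the pipeline described above (resample to a common length, normalize by the sup norm, then apply DTW with a squared-difference local cost), here is a minimal pure-Python sketch. The function names and the use of linear rather than lowess interpolation are my own simplifications, not taken from the papers cited:

```python
def resample(y, n):
    """Linearly interpolate a series onto n equally spaced points."""
    m = len(y)
    out = []
    for i in range(n):
        t = i * (m - 1) / (n - 1)
        lo = int(t)
        hi = min(lo + 1, m - 1)
        out.append(y[lo] + (t - lo) * (y[hi] - y[lo]))
    return out

def sup_norm(y):
    """Divide by the sup norm so amplitudes become comparable across curves."""
    s = max(abs(v) for v in y)
    return [v / s for v in y] if s > 0 else list(y)

def dtw(a, b):
    """Classic dynamic time warping with a squared-difference local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because the diagonal (unwarped) path is always admissible, the DTW cost between two equal-length curves can never exceed their plain pointwise squared distance, which is a handy sanity check.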
24,198
Is there a multivariate version of the Weibull distribution?
There are several in the literature. As for what makes one suitable for your purpose, that rather depends on the purpose. The book Continuous Multivariate Distributions: Models and Applications by Samuel Kotz, N. Balakrishnan, and Norman L. Johnson has some multivariate Weibull models and is probably where I'd start. With the use of copulas, there will be an infinite number of multivariate Weibull distributions; copulas are effectively multivariate distributions with uniform margins. You convert to or from a corresponding multivariate distribution with arbitrary continuous margins by transforming the marginals. That way, general kinds of dependence structure can be accommodated.
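To make the copula remark concrete, here is a minimal standard-library Python sketch (function names are my own) of one such construction, a Gaussian copula with Weibull margins: draw correlated normals, push them through the normal CDF to get dependent uniforms, then apply the Weibull quantile function to each margin.

```python
import math
import random

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def weibull_quantile(u, shape, scale):
    """Inverse of F(x) = 1 - exp(-(x/scale)^shape)."""
    u = min(max(u, 1e-12), 1.0 - 1e-12)  # guard against the CDF rounding to 0 or 1
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

def bivariate_weibull(n, rho, shape1, scale1, shape2, scale2, seed=0):
    """Draw n pairs with Weibull margins coupled by a Gaussian copula."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        # correlated standard normals
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        # normal CDF gives dependent uniforms; quantile transform sets the margins
        u1, u2 = norm_cdf(z1), norm_cdf(z2)
        pairs.append((weibull_quantile(u1, shape1, scale1),
                      weibull_quantile(u2, shape2, scale2)))
    return pairs
```

Each margin is exactly Weibull by the probability integral transform, while the copula correlation parameter rho controls the dependence between them.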
24,199
How to calculate "Paths to the White House" using R?
It is natural to use a recursive solution. The data must consist of a list of the states in play, their electoral votes, and the presumed starting advantage to the left ("blue") candidate. (A value of $47$ comes close to reproducing the NY Times graphic.) At each step, the two possibilities (left wins or loses) are examined; the advantage is updated; if at that point the outcome (win, loss, or tie) can be determined--based on the remaining votes--then the calculation halts; otherwise, it is repeated recursively for the remaining states in the list. Thus:

paths.compute <- function(start, options, states) {
  if (start > sum(options)) x <- list(Id="O", width=1)
  else if (start < -sum(options)) x <- list(Id="R", width=1)
  else if (length(options) == 0 && start == 0) x <- list(Id="*", width=1)
  else {
    l <- paths.compute(start+options[1], options[-1], states[-1])
    r <- paths.compute(start-options[1], options[-1], states[-1])
    x <- list(Id=states[1], L=l, R=r, width=l$width+r$width, node=TRUE)
  }
  class(x) <- "path"
  return(x)
}

states <- c("FL", "OH", "NC", "VA", "WI", "CO", "IA", "NV", "NH")
votes <- c(29, 18, 15, 13, 10, 9, 5, 6, 4)
p <- paths.compute(47, votes, states)

This effectively prunes the tree at each node, requiring much less computation than exploring all $2^9=512$ possible outcomes. The rest is just graphical detail, so I will discuss only those parts of the algorithm that are essential for an effective visualization. The full program follows. It is written in a moderately flexible manner to enable the user to adjust many of the parameters.

The crucial part of the graphing algorithm is the tree layout. To do this, plot.path uses a width field to allocate proportionally the remaining horizontal space to the two descendants of each node. This field is calculated initially by paths.compute as the total number of leaves (descendants) beneath each node. (If some such calculation is not made, and the binary tree is simply split in half at each node, then by the ninth state there is only $1/512$ of the total width available for each leaf, which is far too narrow. Anybody who has started to draw a binary tree on paper has soon experienced this problem!)

The vertical positions of the nodes are arranged in a geometric series (with common ratio a) so that the spacing gets closer in the deeper parts of the tree. The thicknesses of the branches and sizes of the leaf symbols are scaled by depth, too. (This will cause problems with the circular symbols at the leaves, because their aspect ratios will change as a varies. I haven't bothered to fix that up.)

paths.compute <- function(start, options, states) {
  if (start > sum(options)) x <- list(Id="O", width=1)
  else if (start < -sum(options)) x <- list(Id="R", width=1)
  else if (length(options) == 0 && start == 0) x <- list(Id="*", width=1)
  else {
    l <- paths.compute(start+options[1], options[-1], states[-1])
    r <- paths.compute(start-options[1], options[-1], states[-1])
    x <- list(Id=states[1], L=l, R=r, width=l$width+r$width, node=TRUE)
  }
  class(x) <- "path"
  return(x)
}

plot.path <- function(p, depth=0, x0=1/2, y0=1, u=0, v=1, a=.9, delta=0,
                      x.offset=0.01, thickness=12, size.leaf=4, decay=0.15, ...) {
  #
  # Graphical symbols
  #
  cyan <- rgb(.25, .5, .8, .5); cyan.full <- rgb(.625, .75, .9, 1)
  magenta <- rgb(1, .7, .775, .5); magenta.full <- rgb(1, .7, .775, 1)
  gray <- rgb(.95, .9, .4, 1)
  #
  # Graphical elements: circles and connectors.
  #
  circle <- function(center, radius, n.points=60) {
    z <- (1:n.points) * 2 * pi / n.points
    t(rbind(cos(z), sin(z)) * radius + center)
  }
  connect <- function(x1, x2, veer=0.45, n=15, ...) {
    x <- seq(x1[1], x1[2], length.out=5)
    y <- seq(x2[1], x2[2], length.out=5)
    y[2] = veer * y[3] + (1-veer) * y[2]
    y[4] = veer * y[3] + (1-veer) * y[4]
    s = spline(x, y, n)
    lines(s$x, s$y, ...)
  }
  #
  # Plot recursively:
  #
  scale <- exp(-decay * depth)
  if (is.null(p$node)) {
    if (p$Id=="O") {dx <- -y0; color <- cyan.full}
    else if (p$Id=="R") {dx <- y0; color <- magenta.full}
    else {dx = 0; color <- gray}
    polygon(circle(c(x0 + dx*x.offset, y0), size.leaf*scale/100), col=color, border=NA)
    text(x0 + dx*x.offset, y0, p$Id, cex=size.leaf*scale)
  } else {
    mid <- ((delta+p$L$width) * v + (delta+p$R$width) * u) / (p$L$width + p$R$width + 2*delta)
    connect(c(x0, (x0+u)/2), c(y0, y0 * a), lwd=thickness*scale, col=cyan, ...)
    connect(c(x0, (x0+v)/2), c(y0, y0 * a), lwd=thickness*scale, col=magenta, ...)
    plot(p$L, depth=depth+1, x0=(x0+u)/2, y0=y0*a, u, mid, a, delta,
         x.offset, thickness, size.leaf, decay, ...)
    plot(p$R, depth=depth+1, x0=(x0+v)/2, y0=y0*a, mid, v, a, delta,
         x.offset, thickness, size.leaf, decay, ...)
  }
}

plot.grid <- function(p, y0=1, a=.9, col.text="Gray", col.line="White", ...) {
  #
  # Plot horizontal lines and identifiers.
  #
  if (!is.null(p$node)) {
    abline(h=y0, col=col.line, ...)
    text(0.025, y0*1.0125, p$Id, cex=y0, col=col.text, ...)
    plot.grid(p$L, y0=y0*a, a, col.text, col.line, ...)
    plot.grid(p$R, y0=y0*a, a, col.text, col.line, ...)
  }
}

states <- c("FL", "OH", "NC", "VA", "WI", "CO", "IA", "NV", "NH")
votes <- c(29, 18, 15, 13, 10, 9, 5, 6, 4)
p <- paths.compute(47, votes, states)

a <- 0.925
eps <- 1/26
y0 <- a^10; y1 <- 1.05
mai <- par("mai")
par(bg="White", mai=c(eps, eps, eps, eps))
plot(c(0,1), c(a^10, 1.05), type="n", xaxt="n", yaxt="n", xlab="", ylab="")
rect(-eps, y0 - eps * (y1 - y0), 1+eps, y1 + eps * (y1-y0), col="#f0f0f0", border=NA)
plot.grid(p, y0=1, a=a, col="White", col.text="#888888")
plot(p, a=a, delta=40, thickness=12, size.leaf=4, decay=0.2)
par(mai=mai)
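Since only $2^9 = 512$ outcomes exist, the tallies the recursion produces can be cross-checked by brute force. Here is a short Python sketch (not part of the original R program) that classifies every outcome using the same starting advantage and vote list:

```python
from itertools import product

def tally(start, votes):
    """Classify all 2^n outcomes of the swing states: 'O' if the final
    margin is positive, 'R' if negative, 'tie' if exactly zero."""
    counts = {"O": 0, "R": 0, "tie": 0}
    for signs in product((+1, -1), repeat=len(votes)):
        margin = start + sum(s * v for s, v in zip(signs, votes))
        key = "O" if margin > 0 else ("R" if margin < 0 else "tie")
        counts[key] += 1
    return counts

# Same starting advantage and vote list as the R program above.
counts = tally(47, [29, 18, 15, 13, 10, 9, 5, 6, 4])
```

With these inputs a tie occurs exactly when the states won by the left candidate carry 31 electoral votes, since the final margin works out to twice their total minus 62.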
24,200
What is the distance between a finite Gaussian mixture and a Gaussian?
KL divergence would be natural because you have a natural base distribution, the single Gaussian, from which your mixture diverges. On the other hand, the KL divergence (or its symmetrized 'distance' form) between two Gaussian mixtures, of which your problem is a special case, appears to be intractable in general. Hershey and Olsen (2007) looks like a reasonable summary of the available approximations, including variational methods that may offer easier bounds. However, if you want to make an argument about the ill effects of assuming something is Gaussian when it's really a mixture, then it's best to have a good idea of the consequences you're actually interested in, something more specific than simply 'being wrong' (this is @Michael-Chernick's point): for example, the consequences for a test, or an interval, or some such. Two obvious effects of the mixture are overdispersion, which is pretty much guaranteed, and multimodality, which will confuse maximizers.
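As a simple baseline alongside the approximations surveyed by Hershey and Olsen, KL from a mixture to a single Gaussian can be estimated by plain Monte Carlo: sample from the mixture and average the log density ratio. A pure-Python sketch (function names are my own):

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def mixture_pdf(x, comps):
    """comps is a list of (weight, mean, sd) triples with weights summing to 1."""
    return sum(w * normal_pdf(x, m, s) for w, m, s in comps)

def kl_mixture_vs_gaussian(comps, mu, sigma, n=20000, seed=0):
    """Monte Carlo estimate of KL(mixture || N(mu, sigma^2)):
    sample x from the mixture, average log(p(x)/q(x))."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # choose a component by weight, then sample from it
        w, acc, x = rng.random(), 0.0, None
        for wt, m, s in comps:
            acc += wt
            if w <= acc:
                x = rng.gauss(m, s)
                break
        if x is None:  # guard against the weights rounding to just under 1
            x = rng.gauss(comps[-1][1], comps[-1][2])
        total += math.log(mixture_pdf(x, comps) / normal_pdf(x, mu, sigma))
    return total / n
```

Comparing an equal mixture of N(-2, 1) and N(2, 1) against its moment-matched Gaussian (mean 0, variance 5) gives a strictly positive estimate, quantifying exactly the overdispersion-plus-multimodality mismatch described above.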