Dimensionality reduction (SVD or PCA) on a large, sparse matrix
First of all, you really do want to center the data. If not, the geometric interpretation of PCA shows that the first principal component will be close to the vector of means, and all subsequent PCs will be orthogonal to it, which will prevent them from approximating any PCs that happen to be close to that first vector. We can hope that most of the later PCs will be approximately correct, but the value of that is questionable when it is likely the first several PCs, the most important ones, will be quite wrong.

So, what to do? PCA proceeds by means of a singular value decomposition of the matrix $X$. The essential information is contained in $X'X$, which in this case is a $10000$ by $10000$ matrix: that may be manageable. Its computation involves about 50 million dot products of one column with another.

Consider any two columns, then, $Y$ and $Z$ (each of them is a $500000$-vector; let this dimension be $n$). Let their means be $m_Y$ and $m_Z$, respectively. What you want to compute is, writing $\mathbf{1}$ for the $n$-vector of $1$'s,

$$(Y - m_Y\mathbf{1}) \cdot (Z - m_Z\mathbf{1}) = Y\cdot Z - m_Z\mathbf{1}\cdot Y - m_Y\mathbf{1}\cdot Z + m_Y m_Z \mathbf{1}\cdot \mathbf{1} = Y\cdot Z - n\, m_Y m_Z,$$

because $m_Y = \mathbf{1}\cdot Y / n$ and $m_Z = \mathbf{1}\cdot Z / n$. This allows you to use sparse matrix techniques to compute $X'X$, whose entries provide the values of $Y\cdot Z$, and then to adjust its coefficients based on the $10000$ column means. The adjustment shouldn't hurt, because it seems unlikely $X'X$ will be very sparse.

Example

The following R code demonstrates this approach. It uses a stub, get.col, which in practice might read one column of $X$ at a time from an external data source, thereby reducing the amount of RAM required (at some cost in computation speed, of course). It computes PCA in two ways: via SVD applied to the preceding construction, and directly using prcomp. It then compares the output of the two approaches. The computation time is about 50 seconds for 100 columns and scales approximately quadratically: be prepared to wait when performing SVD on a 10K by 10K matrix!

    m <- 500000 # Will be 500,000
    n <- 100    # Will be 10,000
    library("Matrix")
    x <- as(matrix(pmax(0, rnorm(m*n, mean=-2)), nrow=m), "sparseMatrix")
    #
    # Compute the centered version of x'x by having at most two columns
    # of x in memory at any time.
    #
    get.col <- function(i) x[, i]   # Emulates reading a column
    system.time({
      xt.x <- matrix(0, n, n)
      x.means <- numeric(n)
      for (i in 1:n) {
        i.col <- get.col(i)
        x.means[i] <- mean(i.col)
        xt.x[i, i] <- sum(i.col * i.col)
        if (i < n) {
          for (j in (i+1):n) {
            j.col <- get.col(j)
            xt.x[i, j] <- xt.x[j, i] <- sum(j.col * i.col)
          }
        }
      }
      xt.x <- (xt.x - m * outer(x.means, x.means, `*`)) / (m - 1)
      svd.0 <- svd(xt.x / m)
    })
    system.time(pca <- prcomp(x, center=TRUE))
    #
    # Checks: all should be essentially zero.
    #
    max(abs(pca$center - x.means))
    max(abs(xt.x - cov(x)))
    max(abs(abs(svd.0$v / pca$rotation) - 1))  # (This is an unstable calculation.)
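The same centering identity is easy to check outside R. Here is a small NumPy sketch (not part of the original answer; the dimensions are shrunk and `np.cov` stands in for prcomp as the reference). Note that the identity's $n$ (the vector length) is the number of rows, called `m` in the code, matching the R snippet's naming.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2000, 8  # small stand-ins for 500,000 rows x 10,000 columns
# Mostly-zero nonnegative data, mimicking the sparse matrix in the answer
X = np.maximum(0, rng.normal(-2, 1, size=(m, n)))

# Uncentered cross-product matrix: its entries are the dot products Y.Z
G = X.T @ X
means = X.mean(axis=0)

# Centered cross-products via Y.Z - m * mY * mZ, then the usual 1/(m-1)
cov_fast = (G - m * np.outer(means, means)) / (m - 1)

# The adjustment reproduces the dense, centered covariance exactly
assert np.allclose(cov_fast, np.cov(X, rowvar=False))

# Eigenvectors of the small n x n matrix give the principal directions
eigvals, eigvecs = np.linalg.eigh(cov_fast)
```

The point is that only the uncentered Gram matrix ever touches the big sparse data; the mean adjustment happens on the small $n \times n$ result.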
What stop-criteria for agglomerative hierarchical clustering are used in practice?
The following Wikipedia entry actually does a pretty good job of explaining the most popular and relatively simple methods: Determining the number of clusters in a data set. The Elbow Method heuristic described there is probably the most popular, thanks to its simple explanation (amount of variance explained as a function of the number of clusters) coupled with the visual check. The information-theoretic method isn't hard to implement either, and the page has some pseudocode you could use to get started. The latter is analogous to a penalized likelihood based on model complexity, as in the well-known information criteria such as AIC, BIC, etc.
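As a rough illustration (not from the answer above), the elbow heuristic can be sketched in a few lines of NumPy. The toy k-means below (Lloyd's algorithm with farthest-point seeding; the helper name `kmeans_wss` is invented for this sketch) tracks how the explained fraction of variance grows with the number of clusters:

```python
import numpy as np

rng = np.random.default_rng(1)
# Three well-separated 2-D blobs; the "right" answer is k = 3
data = np.vstack([rng.normal(c, 0.3, size=(50, 2))
                  for c in ([0, 0], [4, 0], [0, 4])])

def kmeans_wss(X, k, iters=25):
    """Toy Lloyd's algorithm; returns the within-cluster sum of squares."""
    centers = [X[0]]                     # farthest-point seeding
    for _ in range(k - 1):
        d = ((X[:, None] - np.array(centers)[None]) ** 2).sum(-1).min(1)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return ((X - centers[labels]) ** 2).sum()

total_ss = ((data - data.mean(0)) ** 2).sum()
explained = {k: 1 - kmeans_wss(data, k) / total_ss for k in range(1, 7)}
# Explained variance climbs steeply up to k = 3, then flattens: the "elbow".
```

Plotting `explained` against k and looking for the bend is exactly the visual check the answer describes.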
What stop-criteria for agglomerative hierarchical clustering are used in practice?
It is rather difficult to provide a clear-cut solution for how to choose the "best" number of clusters in your data, whatever clustering method you use, because Cluster Analysis essentially seeks to isolate groups of statistical units (whether individuals or variables) for exploratory or descriptive purposes. Hence, you also have to interpret the output of your clustering scheme, and several cluster solutions may be equally interesting.

Now, regarding the usual statistical criteria used to decide when to stop aggregating data, as pointed out by @ars, most are visually guided, including analysis of the dendrogram and inspection of cluster profiles, also called silhouette plots (Rousseeuw, 1987). Several numerical criteria, also known as validity indices, have been proposed as well, e.g. Dunn's validity index, the Davies-Bouldin validity index, the C index, and Hubert's gamma, to name a few. Hierarchical clustering is often run together with k-means (in fact, several instances of k-means, since it is a stochastic algorithm), so that it adds support to the clustering solutions found.

I don't know whether all of this is readily available in Python, but a huge number of methods is available in R (see the Cluster task view, already cited by @mbq for a related question, What tools could be used for applying clustering algorithms on MovieLens?). Other approaches include fuzzy clustering and model-based clustering (also called latent trait analysis in the psychometric community) if you seek a more robust way to choose the number of clusters in your data.

BTW, I just came across this webpage, scipy-cluster, which is an extension to Scipy for generating, visualizing, and analyzing hierarchical clusters. Maybe it includes other functionalities? I've also heard of PyChem, which offers pretty good stuff for multivariate analysis.

The following reference may also be helpful: Steinley, D., & Brusco, M. J. (2008). Selection of variables in cluster analysis: An empirical comparison of eight procedures. Psychometrika, 73, 125-144.
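To make one of those validity indices concrete, here is a small NumPy sketch (not from the answer; the helper name `mean_silhouette` is invented here) of Rousseeuw's (1987) silhouette coefficient, $s(i) = (b_i - a_i)/\max(a_i, b_i)$, where $a_i$ is the mean distance to points in the same cluster and $b_i$ the mean distance to the nearest other cluster:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two clearly separated clusters with their correct labels
X = np.vstack([rng.normal(0, 0.5, (40, 2)), rng.normal(5, 0.5, (40, 2))])
labels = np.array([0] * 40 + [1] * 40)

def mean_silhouette(X, labels):
    """Average silhouette width: near 1 = compact, well-separated clusters."""
    D = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))  # pairwise distances
    idx = np.arange(len(X))
    s = []
    for i in range(len(X)):
        same = labels == labels[i]
        a = D[i, same & (idx != i)].mean()                 # within-cluster
        b = min(D[i, labels == c].mean()                   # nearest other cluster
                for c in set(labels.tolist()) if c != labels[i])
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

score = mean_silhouette(X, labels)   # high for this well-separated example
```

Comparing this score across candidate numbers of clusters is one numerical way to pick a cut height for the dendrogram.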
What stop-criteria for agglomerative hierarchical clustering are used in practice?
I recently became fond of the clustergram visualization method (implemented in R). I use it as an extra method to assess a "good" number of clusters. Extending it to other clustering methods is not so hard (I actually did it; I just didn't get around to publishing the code).
Real-life examples of common distributions
Wikipedia has a page that lists many probability distributions with links to more detail about each distribution. You can look through the list and follow the links to get a better feel for the types of applications that the different distributions are commonly used for. Just remember that these distributions are used to model reality, and, as Box said, "all models are wrong, some models are useful."

Here are some of the common distributions and some of the reasons that they are useful:

Normal: This is useful for looking at means and other linear combinations (e.g. regression coefficients) because of the CLT. Related to that, if something is known to arise from the additive effects of many different small causes, then the normal may be a reasonable distribution: for example, many biological measures are the result of multiple genes and multiple environmental factors and are therefore often approximately normal.

Gamma: Right-skewed and useful for things with a natural minimum at 0. Commonly used for elapsed times and some financial variables.

Exponential: A special case of the gamma. It is memoryless and scales easily.

Chi-squared ($\chi^2$): A special case of the gamma. Arises as a sum of squared normal variables (and so is used for variances).

Beta: Defined between 0 and 1 (but can be transformed to lie between other values); useful for proportions or other quantities that must lie between 0 and 1.

Binomial: The number of "successes" out of a given number of independent trials with the same probability of "success".

Poisson: Common for counts. It has the nice property that if the number of events in a period of time or region follows a Poisson, then the number in twice the time or region still follows a Poisson (with twice the mean); this works for adding Poissons or for scaling by values other than 2. Note that if events occur over time and the time between occurrences follows an exponential, then the number that occur in a time period follows a Poisson.

Negative binomial: Counts with minimum 0 (or another value, depending on the version) and no upper bound. Conceptually it is the number of "failures" before k "successes". The negative binomial is also a mixture of Poisson variables whose means come from a gamma distribution.

Geometric: The special case of the negative binomial where it is the number of "failures" before the 1st "success". If you truncate (round down) an exponential variable to make it discrete, the result is geometric.
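That last fact, that the floor of an exponential variable is geometric, is easy to check by simulation with nothing but the Python standard library (a quick sketch, with an arbitrary rate and sample size):

```python
import math
import random

random.seed(0)
lam = 0.7            # exponential rate parameter (arbitrary choice)
n = 100_000
# Round an Exponential(lam) variable down to an integer...
samples = [math.floor(random.expovariate(lam)) for _ in range(n)]

# ...and it matches a geometric distribution on {0, 1, 2, ...} counting
# failures before the first success, with p = 1 - exp(-lam):
#   P(floor(X) = k) = exp(-lam * k) * (1 - exp(-lam)) = (1 - p)**k * p
p = 1 - math.exp(-lam)
for k in range(4):
    empirical = sum(s == k for s in samples) / n
    theoretical = (1 - p) ** k * p
    print(k, round(empirical, 3), round(theoretical, 3))
```

The empirical and theoretical probabilities agree to Monte Carlo accuracy.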
Real-life examples of common distributions
Buy and read at least the first 6 chapters (the first 218 pages) of William J. Feller, "An Introduction to Probability Theory and Its Applications, Vol. 2" (http://www.amazon.com/dp/0471257095/ref=rdr_ext_tmb). At least read all of the Problems for Solution, and preferably try solving as many as you can. You don't need to have read Vol. 1, which in my opinion is not particularly meritorious. Despite the author having died 45 1/2 years ago, before the book was even finished, this is simply the finest book there is, bar none, for developing an intuition in probability and stochastic processes, and for understanding and developing a feel for various distributions, how they relate to real-world phenomena, and the various stochastic phenomena which can and do occur. With the solid foundation you will build from it, you will be well served in statistics. If you can make it through the subsequent chapters, which get somewhat more difficult, you will be light years ahead of almost everyone. Simply put, if you know Feller Vol. 2, you know probability (and stochastic processes); anything you don't know, such as new developments, you will be able to quickly pick up and master by building on that solid foundation. Almost everything previously mentioned in this thread is in Feller Vol. 2 (not all of the material in Kendall's Advanced Theory of Statistics, but reading that book will be a piece of cake after Feller Vol. 2), and more, much more, all of it presented in a way which should develop your stochastic thinking and intuition. Johnson and Kotz is good for minutiae on various probability distributions; Feller Vol. 2 is useful for learning how to think probabilistically, and for knowing what to extract from Johnson and Kotz and how to use it.
Real-life examples of common distributions
Some common probability distributions:

Uniform distribution (discrete): You roll one die, and the probability of getting any of 1, 2, 3, 4, 5, and 6 is equal.

Uniform distribution (continuous): You spray some very fine powder towards a wall. Over a small area of the wall, the chance of dust falling on any particular spot is uniform. Likewise, in a big cylinder of gas, the number of gas molecules hitting each square cm of the inner wall per second is approximately uniform.

Bernoulli distribution: A Bernoulli trial is a random experiment with exactly two possible outcomes, "success" and "failure", where the probability of success is p and the probability of failure is q = 1 - p. For example, a coin toss has two outcomes, head or tail. For a fair coin, the probability of a head is 1/2 and the probability of a tail is 1/2; this is a Bernoulli distribution that is also uniform. If the coin is unfair, so that the probability of getting a head is 0.9, then the probability of a tail is 0.1.

Binomial distribution: If a Bernoulli trial (with two outcomes, with probabilities p and q = 1 - p) is run n times, such as a coin tossed n times, there is a small probability of getting all heads and a small probability of getting all tails; some intermediate number of heads is most probable. This distribution is called the binomial distribution.

Poisson distribution: An example from Wikipedia: a person keeping track of the amount of mail they receive each day may notice that they receive an average of 4 letters per day. If the letters come from independent sources, then the number of pieces of mail received in a day obeys a Poisson distribution: there is a negligible chance of getting 0 or 100 letters in a day, and a maximum probability at a certain number (here 4) per day. Similarly, suppose that in an imaginary meadow we find around 10 pebbles per km²; with proportionally more area we find proportionally more pebbles, but for any given 1 km² sample it is very unlikely to find 0 or 100 pebbles, so the count probably follows a Poisson distribution. According to Wikipedia, the number of decay events per second from a radioactive source also follows a Poisson distribution.

Normal (Gaussian) distribution: If n dice are rolled simultaneously, and n is very large, the sum of the outcomes tends to cluster around a central value: not too big, not too small. This distribution is called the normal distribution, or bell-shaped curve, and the convergence of the sum is the content of the central limit theorem. Similarly, if n coins are tossed simultaneously and n is very large, there is little chance of getting too many heads or too many tails; the number of heads will center around a certain value. That is again the binomial distribution, which approaches the normal as the number of coins grows.
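The dice example above is easy to simulate with only the Python standard library (a rough sketch; the numbers of dice and rolls are arbitrary choices):

```python
import random
import statistics

random.seed(1)
n_dice, n_rolls = 30, 20_000
# Sum of 30 fair dice, repeated many times
sums = [sum(random.randint(1, 6) for _ in range(n_dice)) for _ in range(n_rolls)]

# One die has mean 3.5 and variance 35/12, so by the CLT the sum of 30 dice
# is approximately Normal(mean = 105, sd = sqrt(30 * 35/12) ~ 9.35).
mean = statistics.fmean(sums)
sd = statistics.stdev(sums)
# Roughly 68% of the sums should land within one sd of 105, as for a normal.
share_1sd = sum(abs(s - 105) <= 9.35 for s in sums) / n_rolls
```

The simulated mean, spread, and one-sigma coverage all line up with the normal approximation.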
Real-life examples of common distributions
Asymptotic theory leads to the normal distribution, the extreme-value types, the stable laws, and the Poisson. The exponential and the Weibull tend to come up as parametric time-to-event distributions; the Weibull is, in fact, an extreme-value type for the minimum of a sample. Related to parametric models for normally distributed observations, the chi-squared, t, and F distributions arise in hypothesis testing and confidence interval estimation. The chi-squared also comes up in contingency table analysis and goodness-of-fit tests. For studying the power of tests we have the noncentral t and F distributions. The hypergeometric distribution arises in Fisher's exact test for contingency tables. The binomial distribution is important when running experiments to estimate proportions. The negative binomial is an important distribution for modeling overdispersion in a point process. That should give you a good start on practical parametric distributions. For nonnegative random variables on (0, ∞), the gamma distribution is flexible in providing a variety of shapes, and the log-normal is also commonly used. On [0, 1], the beta family provides symmetric distributions (including the uniform) as well as distributions skewed left or right. I should also mention that if you want to know all the nitty-gritty details about distributions in statistics, there are the classic series of books by Johnson and Kotz covering discrete distributions, continuous univariate distributions, and continuous multivariate distributions, as well as volume 1 of the Advanced Theory of Statistics by Kendall and Stuart.
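For instance, the relationship between the normal and the chi-squared mentioned above can be checked directly by simulation (a standard-library Python sketch; the degrees of freedom and sample size are arbitrary choices):

```python
import random
import statistics

random.seed(2)
df, n = 5, 50_000
# A chi-squared variable with df degrees of freedom is the sum of df squared
# independent standard normals; its mean is df and its variance is 2 * df.
draws = [sum(random.gauss(0, 1) ** 2 for _ in range(df)) for _ in range(n)]
mean = statistics.fmean(draws)    # should be close to df = 5
var = statistics.variance(draws)  # should be close to 2 * df = 10
```

The simulated moments match the theoretical mean df and variance 2·df, which is exactly why squared-error statistics from normal data end up with chi-squared reference distributions.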
8,009
Real-life examples of common distributions
Just to add to the other excellent answers. The Poisson distribution is useful whenever we have counting variables, as others have mentioned. But much more should be said! The Poisson arises asymptotically from a binomially distributed variable when $n$ (the number of Bernoulli experiments) increases without bound and $p$ (the success probability of each individual experiment) goes to zero, in such a way that $\lambda=n p$ stays constant, bounded away from zero and infinity. This tells us it is useful whenever we have a large number of individually very improbable events. Some good examples are accidents, such as the number of car crashes in New York in a day: each time two cars pass or meet there is a very low probability of a crash, and the number of such opportunities is indeed astronomical! Now you yourself can think about other examples, such as the total number of plane crashes in the world in a year. The classic example is the number of deaths by horse kicks in the Prussian cavalry! When the Poisson is used in epidemiology, for modelling the number of cases of some sickness, one often finds it does not fit well: the variance is too large! The Poisson has variance = mean, which can be seen easily from the limit of the binomial: in the binomial the variance is $n p (1-p)$, and when $p$ goes to zero, $1-p$ necessarily goes to one, so the variance goes to $np$, which is the expectation, and both go to $\lambda$. One remedy is to search for an alternative to the Poisson with larger variance, not constrained to equal the mean, such as the negative binomial. But why does this phenomenon of larger variance occur? One possibility is that the individual probabilities of sickness $p$ are not constant across persons, and do not depend only on observed covariates (say age, occupation, smoking status, ...). That is called unobserved heterogeneity, and the models used for it are sometimes called frailty models, or mixed models. 
One way of doing this is to assume the $p$'s in the population come from some distribution; assuming that is a gamma distribution, for instance (which makes for simpler maths...), we get the gamma-Poisson distribution --- which recovers the negative binomial!
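The binomial-to-Poisson limit described above is easy to verify numerically: hold $\lambda = np$ fixed and let $n$ grow. A small sketch using only the standard library (function names are mine):

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """Poisson probability mass function."""
    return exp(-lam) * lam**k / factorial(k)

lam = 3.0
errs = {}
for n in (10, 100, 10_000):
    p = lam / n  # keep n*p = lambda fixed as n grows
    errs[n] = max(abs(binom_pmf(k, n, p) - poisson_pmf(k, lam))
                  for k in range(11))
print(errs)  # the pointwise discrepancy shrinks as n increases
```

The maximum pointwise gap between the two pmfs drops by roughly a factor of ten for each tenfold increase in $n$.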
8,010
Real-life examples of common distributions
Recently published research suggests that human performance is NOT normally distributed, contrary to common thought. Data from four fields were analyzed: (1) Academics in 50 disciplines, based on publishing frequency in the most pre-eminent discipline-specific journals. (2) Entertainers, such as actors, musicians and writers, and the number of prestigious awards, nominations or distinctions received. (3) Politicians in 10 nations and election/re-election results. (4) Collegiate and professional athletes looking at the most individualized measures available, such as the number of home runs, receptions in team sports and total wins in individual sports. The author writes, "We saw a clear and consistent power-law distribution unfold in each study, regardless of how narrowly or broadly we analyzed the data..."
8,011
Real-life examples of common distributions
Cauchy distribution is often used in finance to model asset returns. Also noteworthy are Johnson’s Bounded and Unbounded distributions due to their flexibility (I’ve applied them in modeling asset prices, electricity generation and hydrology).
8,012
Why are lower p-values not more evidence against the null? Arguments from Johansson 2011
My personal appraisal of his arguments: Here he talks about using $p$ as evidence for the Null, whereas his thesis is that $p$ can't be used as evidence against the Null. So I think this argument is largely irrelevant. I think this is a misunderstanding. Fisherian $p$ testing follows closely Popper's Critical Rationalism, which states that you cannot support a theory but only criticize it. So in that sense there is only a single hypothesis (the Null), and you simply check whether your data are in accordance with it. I disagree here. It depends on the test statistic, but $p$ is usually a transformation of an effect size that speaks against the Null. So the higher the effect, the lower the p-value---all other things being equal. Of course, across different data sets or hypotheses this is no longer valid. I am not sure I completely understand this statement, but from what I can gather this is less a problem of $p$ than of people using it wrongly. $p$ was intended to have the long-run frequency interpretation, and that is a feature, not a bug. But you can't blame $p$ for people taking a single $p$ value as proof of their hypothesis, or for people publishing only $p<.05$. His suggestion of using the likelihood ratio as a measure of evidence is in my opinion a good one (although the idea of a Bayes factor is more general), but the context in which he brings it up is a bit peculiar: First, he leaves the grounds of Fisherian testing, where there is no alternative hypothesis from which to calculate the likelihood ratio. But $p$ as evidence against the Null is Fisherian. Hence he conflates Fisher and Neyman-Pearson. Second, most test statistics that we use are (functions of) the likelihood ratio, and in that case $p$ is a transformation of the likelihood ratio. 
As Cosma Shalizi puts it: among all tests of a given size $s$, the one with the smallest miss probability, or highest power, has the form "say 'signal' if $q(x)/p(x) > t(s)$, otherwise say 'noise'," and the threshold $t$ varies inversely with $s$. The quantity $q(x)/p(x)$ is the likelihood ratio; the Neyman-Pearson lemma says that to maximize power, we should say "signal" if it is sufficiently more likely than noise. Here $q(x)$ is the density under state "signal" and $p(x)$ the density under state "noise". The measure of "sufficiently likely" would here be $P(q(X)/p(X) > t_{obs} \mid H_0)$, which is $p$. Note that in correct Neyman-Pearson testing, $t_{obs}$ is replaced by a fixed $t(s)$ such that $P(q(X)/p(X) > t(s) \mid H_0)=\alpha$.
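As a concrete instance of this, take "noise" to be $X \sim N(0,1)$ and "signal" to be $X \sim N(\mu,1)$. Then $q(x)/p(x) = \exp(\mu x - \mu^2/2)$ is monotone increasing in $x$, so $P(q(X)/p(X) > t_{obs} \mid H_0)$ is just the upper tail beyond $x_{obs}$, i.e. the usual one-sided $p$. A sketch of that setup (not Johansson's or Shalizi's code; the names are mine):

```python
from math import exp, erf, sqrt

MU = 1.0  # the "signal" mean; any mu > 0 gives a monotone LR

def likelihood_ratio(x, mu=MU):
    """q(x)/p(x) for N(mu,1) ("signal") versus N(0,1) ("noise")."""
    return exp(mu * x - mu**2 / 2)

def p_value(x_obs):
    """P(X > x_obs | H0) with H0: X ~ N(0,1).  Because the LR is
    monotone increasing in x, this equals P(LR(X) > LR(x_obs) | H0)."""
    return 0.5 * (1 - erf(x_obs / sqrt(2)))

for x in (0.5, 1.0, 2.0, 3.0):
    print(x, likelihood_ratio(x), p_value(x))
```

Larger observed effects give a larger likelihood ratio and a smaller $p$, illustrating that here $p$ is indeed a (decreasing) transformation of the likelihood ratio.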
8,013
Why are lower p-values not more evidence against the null? Arguments from Johansson 2011
The reason that arguments like Johansson's are recycled so often seems to be related to the fact that P-values are indices of the evidence against the null but are not measures of the evidence. The evidence has more dimensions than any single number can measure, and so there are always aspects of the relationship between P-values and evidence that people can find difficult. I have reviewed many of the arguments used by Johansson in a paper that shows the relationship between P-values and likelihood functions, and thus evidence: http://arxiv.org/abs/1311.0081 Unfortunately that paper has now been rejected three times, although its arguments and the evidence for them have not been refuted. (It seems it is distasteful to referees who hold opinions like Johansson's, rather than wrong.)
8,014
Why are lower p-values not more evidence against the null? Arguments from Johansson 2011
Adding to @Momo's nice answer: Do not forget multiplicity. Given many independent p-values, and sparse non-trivial effect sizes, the smallest p-values are from the null, with probability tending to $1$ as the number of hypotheses increases. So if you tell me you have a small p-value, the first thing I want to know is how many hypotheses you have been testing.
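This effect shows up readily in simulation. Since a one-sided p-value is a monotone decreasing function of the z-statistic, the smallest p-value belongs to the largest z, so we can compare maxima directly. The sketch below is my own setup (10,000 true nulls, 5 real effects of size 2):

```python
import random

def smallest_p_is_null(n_null=10_000, n_alt=5, effect=2.0, rng=random):
    """One simulated testing screen: n_null true nulls (z ~ N(0,1)) and
    n_alt real effects (z ~ N(effect,1)).  One-sided p-values decrease
    monotonically in z, so the smallest p belongs to the largest z;
    return True if that winner is a null."""
    max_null_z = max(rng.gauss(0, 1) for _ in range(n_null))
    max_alt_z = max(rng.gauss(effect, 1) for _ in range(n_alt))
    return max_null_z > max_alt_z

rng = random.Random(1)
runs = 200
frac = sum(smallest_p_is_null(rng=rng) for _ in range(runs)) / runs
print(frac)  # with this many nulls, the winner is usually a null
```

With even more hypotheses (larger `n_null`) the fraction climbs toward 1, which is exactly why the number of tests performed matters when interpreting a small p-value.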
8,015
Why are lower p-values not more evidence against the null? Arguments from Johansson 2011
Is Johansson talking about p-values from two different experiments? If so, comparing p-values may be like comparing apples to lamb chops. If experiment "A" involves a huge number of samples, even a small inconsequential difference may be statistically significant. If experiment "B" involves only a few samples, an important difference may be statistically insignificant. Even worse (that's why I said lamb chops and not oranges), the scales may be totally incomparable (psi in one and kwh in the other).
8,016
Algorithm to dynamically monitor quantiles
The P2 algorithm is a nice find. It works by making several estimates of the quantile, updating them periodically, and using quadratic (not linear, not cubic) interpolation to estimate the quantile. The authors claim quadratic interpolation works better in the tails than linear interpolation and cubic would get too fussy and difficult. You do not state exactly how this approach fails for your "heavy-tailed" data, but it's easy to guess: estimates of extreme quantiles for heavy-tailed distributions will be unstable until a large amount of data are collected. But this is going to be a problem (to a lesser extent) even if you were to store all the data, so don't expect miracles! At any rate, why not set auxiliary markers--let's call them $x_0$ and $x_6$--within which you are highly certain the quantile will lie, and store all data that lie between $x_0$ and $x_6$? When your buffer fills you will have to update these markers, always keeping $x_0 \le x_6$. A simple algorithm to do this can be devised from a combination of (a) the current P2 estimate of the quantile and (b) stored counts of the number of data less than $x_0$ and the number of data greater than $x_6$. In this fashion you can, with high certainty, estimate the quantile just as well as if you had the entire dataset always available, but you only need a relatively small buffer. Specifically, I am proposing a data structure $(k, \mathbf{y}, n)$ to maintain partial information about a sequence of $n$ data values $x_1, x_2, \ldots, x_n$. Here, $\mathbf{y}$ is a linked list $$\mathbf{y} = (x^{(n)}_{[k+1]} \le x^{(n)}_{[k+2]} \le \cdots \le x^{(n)}_{[k+m]}).$$ In this notation $x^{(n)}_{[i]}$ denotes the $i^\text{th}$ smallest of the $n$ $x$ values read so far. $m$ is a constant, the size of the buffer $\mathbf{y}$. The algorithm begins by filling $\mathbf{y}$ with the first $m$ data values encountered and placing them in sorted order, smallest to largest. 
Let $q$ be the quantile to be estimated; e.g., $q$ = 0.99. Upon reading $x_{n+1}$ there are three possible actions: If $x_{n+1} \lt x^{(n)}_{[k+1]}$, increment $k$. If $x_{n+1} \gt x^{(n)}_{[k+m]}$, do nothing. Otherwise, insert $x_{n+1}$ into $\mathbf{y}$. In any event, increment $n$. The insert procedure puts $x_{n+1}$ into $\mathbf{y}$ in sorted order and then eliminates one of the extreme values in $\mathbf{y}$: If $k + m/2 \lt n q$, then remove $x^{(n)}_{[k+1]}$ from $\mathbf{y}$ and increment $k$; Otherwise, remove $x^{(n)}_{[k+m]}$ from $\mathbf{y}$. Provided $m$ is sufficiently large, this procedure will bracket the true quantile of the distribution with high probability. At any stage $n$ it can be estimated in the usual way in terms of $x^{(n)}_{[\lfloor{q n}\rfloor]}$ and $x^{(n)}_{[\lceil{q n}\rceil]}$, which will likely lie in $\mathbf{y}$. (I believe $m$ only has to scale like the square root of the maximum amount of data ($N$), but I have not carried out a rigorous analysis to prove that.) At any rate, the algorithm will detect whether it has succeeded (by comparing $k/n$ and $(k+m)/n$ to $q$). Testing with up to 100,000 values, using $m = 2\sqrt{N}$ and $q=.5$ (the most difficult case) indicates this algorithm has a 99.5% success rate in obtaining the correct value of $x^{(n)}_{[\lfloor{q n}\rfloor]}$. For a stream of $N=10^{12}$ values, that would require a buffer of only two million (but three or four million would be a better choice). Using a sorted doubly linked list for the buffer requires $O(\log(\sqrt{N}))$ = $O(\log(N))$ effort while identifying and deleting the max or min are $O(1)$ operations. The relatively expensive insertion typically needs to be done only $O(\sqrt{N})$ times. Thus the computational costs of this algorithm are $O(N + \sqrt{N} \log(N)) = O(N)$ in time and $O(\sqrt{N})$ in storage.
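The procedure just described can be sketched directly in Python. I use an ordinary sorted list with `bisect` in place of the doubly linked list, so insertion here costs $O(m)$ rather than $O(\log m)$; the bookkeeping $(k, \mathbf{y}, n)$ is otherwise as above, and the function name and test data are my own:

```python
import bisect
import random

def stream_quantile(stream, q, m):
    """Estimate the q-quantile of a stream using a buffer of m values.
    y holds (with high probability) the order statistics of ranks
    k+1 .. k+m among the n values seen so far."""
    stream = iter(stream)
    y = sorted(next(stream) for _ in range(m))
    k, n = 0, m
    for x in stream:
        if x < y[0]:
            k += 1                 # below the window: just count it
        elif x > y[-1]:
            pass                   # above the window: discard it
        else:
            bisect.insort(y, x)    # insert in sorted order, then
            if k + m / 2 < n * q:  # drop one extreme to keep len(y) == m
                y.pop(0)
                k += 1
            else:
                y.pop()
        n += 1
    i = int(q * n) - 1 - k         # 0-based position of rank floor(qn) in y
    if 0 <= i < m:
        return y[i]
    raise ValueError("buffer failed to bracket the quantile; increase m")

rng = random.Random(0)
data = [rng.random() for _ in range(10_000)]
est = stream_quantile(data, 0.99, m=500)
print(est, sorted(data)[int(0.99 * len(data)) - 1])
```

With $m$ around a few multiples of $\sqrt{N}$ the returned value agrees with the order statistic computed from the fully stored data in almost every run, as the success-rate figures above suggest.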
8,017
Algorithm to dynamically monitor quantiles
I think whuber's suggestion is great and I would try that first. However, if you find you really can't accommodate the $O(\sqrt N)$ storage, or it doesn't work out for some other reason, here is an idea for a different generalization of P2. It's not as detailed as what whuber suggests - more like a research idea than a solution. Instead of tracking the quantiles at $0$, $p/2$, $p$, $(1+p)/2$, and $1$, as the original P2 algorithm suggests, you could simply keep track of more quantiles (but still a constant number). It looks like the algorithm allows for that in a very straightforward manner; all you need to do is compute the correct "bucket" for incoming points, and the right way to update the quantiles (quadratically, using adjacent numbers). Say you keep track of $25$ points. You could try tracking the quantiles at $0$, $p/12$, $\dotsc$, $p \cdot 11/12$, $p$, $p + (1-p)/12$, $\dotsc$, $p + 11\cdot(1-p)/12$, $1$ (picking the points equidistantly between $0$ and $p$, and between $p$ and $1$), or even using $22$ Chebyshev nodes of the form $p/2 \cdot (1 + \cos \frac{(2 i - 1)\pi}{22})$ and $p + (1 - p)/2 \cdot (1 + \cos \frac{(2i-1)\pi}{22})$. If $p$ is close to $0$ or $1$, you could try putting fewer points on the side where there is less probability mass and more on the other side. If you decide to pursue this, I (and possibly others on this site) would be interested in knowing whether it works...
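For concreteness, here is one way to compute the 25 marker positions sketched above: 11 Chebyshev nodes inside $(0, p)$ and inside $(p, 1)$, plus the fixed markers at $0$, $p$, and $1$ (the function name is mine):

```python
from math import cos, pi

def chebyshev_markers(p, n_side=11):
    """Quantile positions for a generalized P2: n_side Chebyshev nodes
    inside (0, p) and inside (p, 1), plus markers at 0, p, and 1."""
    def nodes(lo, hi, n):
        # Chebyshev nodes: lo + (hi - lo)/2 * (1 + cos((2i - 1) pi / (2n)))
        return sorted(lo + (hi - lo) / 2 * (1 + cos((2 * i - 1) * pi / (2 * n)))
                      for i in range(1, n + 1))
    return [0.0] + nodes(0.0, p, n_side) + [p] + nodes(p, 1.0, n_side) + [1.0]

markers = chebyshev_markers(0.99)
print(len(markers))    # 25 tracked quantiles
print(markers[10:14])  # nodes crowd toward p from both sides
```

The Chebyshev spacing concentrates markers near the endpoints of each interval, i.e. near the target quantile $p$, which is where the interpolation accuracy matters most.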
8,018
Algorithm to dynamically monitor quantiles
Press et al., Numerical Recipes, 8.5.2 "Single-pass estimation of arbitrary quantiles", p. 435, give a C++ class IQAgent which updates a piecewise-linear approximate CDF.
8,019
Algorithm to dynamically monitor quantiles
I'd look at quantile regression. You can use it to determine a parametric estimate of whichever quantiles you want to look at. It makes no assumption of normality, so it handles heteroskedasticity pretty well, and it can be used on a rolling-window basis. It's basically an L1-norm penalized regression, so it's not too numerically intensive, and there are pretty full-featured R, SAS, and SPSS packages, plus a few MATLAB implementations out there. Here's the main and the R package wikis for more info. Edited: Check out the math stack exchange crosslink: Someone cited a couple of papers that essentially lay out the very simple idea of just using a rolling window of order statistics to estimate quantiles. Literally all you have to do is sort the values from smallest to largest, select which quantile you want, and select the highest value within that quantile. You can obviously give more weight to the most recent observations if you believe they are more representative of actual current conditions. This will probably give rough estimates, but it's fairly simple to do and you don't have to go through the motions of quantitative heavy lifting. Just a thought.
8,020
Algorithm to dynamically monitor quantiles
This can be adapted from algorithms that determine the median of a dataset online. For more information, see this stackoverflow post - https://stackoverflow.com/questions/1387497/find-median-value-from-a-growing-set
8,021
Algorithm to dynamically monitor quantiles
It is possible to estimate (and track) quantiles on an on-line basis (the same applies to the parameters of a quantile regression). In essence, this boils down to stochastic gradient descent on the check-loss function which defines quantile regression (quantiles being represented by a model containing only an intercept), i.e. updating the unknown parameters as and when observations arrive. See the Bell Labs paper "Incremental Quantile Estimation for Massive Tracking" ( ftp://ftp.cse.buffalo.edu/users/azhang/disc/disc01/cd1/out/papers/kdd/p516-chen.pdf)
8,022
What is the standard error of the sample standard deviation?
Let $\mu_4 = E(X-\mu)^4$. Then the formula for the SE of $s^2$ is: $$ se(s^2) = \sqrt{ \frac{1}{n}\left(\mu_4 -\frac{n-3}{n-1} \sigma^4\right)} $$ This is an exact formula, valid for any sample size and distribution, and is proved on page 438 of Rao, 1973, assuming that $\mu_4$ is finite. The formula you gave in your question applies only to Normally distributed data. Let $\hat{\theta} = s^2$. You want to find the SE of $ g(\hat{\theta})$, where $g(u) = \sqrt{u}$. There is no general exact formula for this standard error, as @Alecos Papadopoulos pointed out. However, one can derive an approximate (large-sample) standard error by means of the delta method. (See the Wikipedia entry for "delta method".) Here's how Rao, 1973, 6.a.2.4 put it. I include the absolute value indicators, which he incorrectly omitted. $$ se(g(\hat{\theta})) \approx |g'(\hat\theta)|\times se(\hat{\theta}) $$ where $g'$ is the first derivative. Now for the square root function $g$ $$ g'(u) = \frac{1}{2\thinspace u^{1/2}} $$ So: $$ se(s)\approx \frac{1}{2 \sigma} se(s^2) $$ In practice I would estimate the standard error by the bootstrap or jackknife. Reference: CR Rao (1973) Linear Statistical Inference and its Applications, 2nd Ed., John Wiley & Sons, NY
8,023
How to train and validate a neural network model in R?
Max Kuhn's caret Manual - Model Building is a great starting point. I would think of the validation stage as occurring within the caret train() call, since it is choosing your hyperparameters of decay and size via bootstrapping or some other approach that you can specify via the trControl parameter. I call the data set I use for characterizing the error of the final chosen model my test set. Since caret handles selection of hyperparameters for you, you just need a training set and a test set. You can use the createDataPartition() function in caret to split your data set into training and test sets. I tested this using the Prestige data set from the car package, which has information about income as related to level of education and occupational prestige:

library(car)
library(caret)
trainIndex <- createDataPartition(Prestige$income, p = .7, list = FALSE)
prestige.train <- Prestige[trainIndex, ]
prestige.test <- Prestige[-trainIndex, ]

The createDataPartition() function seems a little misnamed because it doesn't create the partition for you, but rather provides a vector of indexes that you then can use to construct training and test sets. It's pretty easy to do this yourself in R using sample(), but one thing createDataPartition() apparently does do is sample from within factor levels. Moreover, if your outcome is categorical, the distribution is maintained across the data partitions. It's not relevant in this case, however, since your outcome is continuous. Now you can train your model on the training set:

my.grid <- expand.grid(.decay = c(0.5, 0.1), .size = c(5, 6, 7))
prestige.fit <- train(income ~ prestige + education, data = prestige.train,
                      method = "nnet", maxit = 1000, tuneGrid = my.grid,
                      trace = FALSE, linout = 1)

Aside: I had to add the linout parameter to get nnet to work with a regression (vs. classification) problem. Otherwise I got all 1s as predicted values from the model. You can then call predict on the fit object using the test data set and calculate RMSE from the results:

prestige.predict <- predict(prestige.fit, newdata = prestige.test)
prestige.rmse <- sqrt(mean((prestige.predict - prestige.test$income)^2))
8,024
What is the fiducial argument and why has it not been accepted?
I am surprised that you don't consider us authorities. Here is a good reference: Encyclopedia of Biostatistics, Volume 2, page 1526; article titled "Fisher, Ronald Aylmer." Starting at the bottom of the first column on the page and going through most of the second column, the authors Joan Fisher Box (R. A. Fisher's daughter) and A. W. F. Edwards write Fisher introduced the fiducial argument in 1930 [11].... Controversy arose immediately. Fisher had proposed the fiducial argument as an alternative to the Bayesian argument of inverse probability, which he condemned when no objective prior probability could be stated. They go on to discuss the debates with Jeffreys and Neyman (particularly Neyman on confidence intervals). The Neyman-Pearson theory of hypothesis testing and confidence intervals came out in the 1930s after Fisher's article. A key sentence followed: Later difficulties with the fiducial argument arose in cases of multivariate estimation because of the nonuniqueness of the pivotals. In the same volume of the Encyclopedia of Biostatistics there is an article, pp. 1510-1515, titled "Fiducial Probability" by Teddy Seidenfeld which covers the method in detail and compares fiducial intervals to confidence intervals. To quote from the last paragraph of that article: In a 1963 conference on fiducial probability, Savage wrote 'The aim of fiducial probability ... seems to be what I term making the Bayesian omelet without breaking the Bayesian eggs.' In that sense, fiducial probability is impossible. As with many great intellectual contributions, what is of lasting value is what we learn trying to understand Fisher's insights on fiducial probability. (See Edwards [4] for much more on this theme.) His solution to the Behrens-Fisher problem, for example, was a brilliant treatment of nuisance parameters using Bayes' theorem. In this sense, "...the fiducial argument is 'learning from Fisher' [36, p.926]. 
Thus interpreted, it certainly remains a valuable addition to statistical lore. I think in these last few sentences Edwards is trying to put a favorable light on Fisher even though his theory was discredited. I am sure that you can find a wealth of information on this by going through these encyclopedia papers and similar ones in other statistics papers as well as biographical articles and books on Fisher. Some other references: Box, J. Fisher (1978). "R. A. Fisher: The Life of a Scientist." Wiley, New York. Fisher, R. A. (1930). Inverse Probability. Proceedings of the Cambridge Philosophical Society 26, 528-535. Bennett, J. H., editor (1990). Statistical Inference and Analysis: Selected Correspondence of R. A. Fisher. Clarendon Press, Oxford. Edwards, A. W. F. (1995). Fiducial inference and the fundamental theorem of natural selection. Biometrics 51, 799-809. Savage, L. J. (1963). Discussion. Bulletin of the International Statistical Institute 40, 925-927. Seidenfeld, T. (1979). "Philosophical Problems of Statistical Inference." Reidel, Dordrecht. Seidenfeld, T. (1992). R. A. Fisher's fiducial argument and Bayes' theorem. Statistical Science 7, 358-368. Tukey, J. W. (1957). Some examples with fiducial relevance. Annals of Mathematical Statistics 28, 687-695. Zabell, S. L. (1992). R. A. Fisher and the fiducial argument. Statistical Science 7, 369-387. 
The concept is difficult to understand because Fisher kept changing it. As Seidenfeld said in his article in the Encyclopedia of Biostatistics: Following the 1930 publication, during the remaining 32 years of his life, through two books and numerous articles, Fisher steadfastly held to the idea captured in (1), and the reasoning leading to it, which we may call 'fiducial inverse inference' - then there is little wonder that Fisher caused such puzzles with his novel idea. Equation (1) that Seidenfeld refers to is the fiducial distribution of the parameter $\theta$ given $x$: $\text{fid}(\theta|x) \propto \partial F/\partial \theta$, where $F(x,\theta)$ denotes a one-parameter cumulative distribution function for the random variable $X$ at $x$ with parameter $\theta$. At least this was Fisher's initial definition. Later it got extended to multiple parameters, and that is where the trouble began, with the nuisance parameter $\sigma$ in the Behrens-Fisher problem. So a fiducial distribution is like a posterior distribution for the parameter $\theta$ given the observed data $x$, but it is constructed without the inclusion of a prior distribution on $\theta$. I went to some trouble getting all this but it is not hard to find. We are really not needed to answer questions like this. A Google search with key words "fiducial inference" would likely show everything I found and a whole lot more. I did a Google search and found that a UNC professor, Jan Hannig, has generalized fiducial inference in an attempt to improve it. A Google search yields a number of his recent papers and a PowerPoint presentation. I am going to copy and paste the last two slides from his presentation below: Concluding Remarks Generalized fiducial distributions often lead to attractive solutions with asymptotically correct frequentist coverage. Many simulation studies show that generalized fiducial solutions have very good small sample properties. 
Current popularity of generalized inference in some applied circles suggests that if computers were available 70 years ago, fiducial inference might not have been rejected. Quotes Zabell (1992) “Fiducial inference stands as R. A. Fisher’s one great failure.” Efron (1998) “Maybe Fisher’s biggest blunder will become a big hit in the 21st century!” Just to add more references, here is the reference list I have taken from Hannig's 2009 Statistica Sinica paper. Pardon the repetition but I think this will be helpful. Burch, B. D. and Iyer, H. K. (1997). Exact confidence intervals for a variance ratio (or heritability) in a mixed linear model. Biometrics 53, 1318-1333. Burdick, R. K., Borror, C. M. and Montgomery, D. C. (2005a). Design and Analysis of Gauge R&R Studies. ASA-SIAM Series on Statistics and Applied Probability. Philadelphia, PA: Society for Industrial and Applied Mathematics. Burdick, R. K., Park, Y.-J., Montgomery, D. C. and Borror, C. M. (2005b). Confidence intervals for misclassification rates in a gauge R&R study. J. Quality Tech. 37, 294-303. Cai, T. T. (2005). One-sided confidence intervals in discrete distributions. J. Statist. Plann. Inference 131, 63-88. Casella, G. and Berger, R. L. (2002). Statistical Inference. Wadsworth and Brooks/Cole Advanced Books and Software, Pacific Grove, CA, second edn. Daniels, L., Burdick, R. K. and Quiroz, J. (2005). Confidence Intervals in a Gauge R&R Study with Fixed Operators. J. Quality Tech. 37, 179-185. Dawid, A. P. and Stone, M. (1982). The functional-model basis of fiducial inference. Ann. Statist. 10, 1054-1074. With discussions by G. A. Barnard and by D. A. S. Fraser, and a reply by the authors. Dawid, A. P., Stone, M. and Zidek, J. V. (1973). Marginalization paradoxes in Bayesian and structural inference. J. Roy. Statist. Soc. Ser. B 35, 189-233. With discussion by D. J. Bartholomew, A. D. McLaren, D. V. Lindley, Bradley Efron, J. Dickey, G. N. Wilkinson, A. P. Dempster, D. V. Hinkley, M. R. 
Novick, Seymour Geisser, D. A. S. Fraser and A. Zellner, and a reply by A. P. Dawid, M. Stone, and J. V. Zidek. Dempster, A. P. (1966). New methods for reasoning towards posterior distributions based on sample data. Ann. Math. Statist. 37, 355-374. Dempster, A. P. (1968). A generalization of Bayesian inference. (With discussion). J. Roy. Statist. Soc. B 30, 205-247. Dempster, A. P. (2008). The Dempster-Shafer calculus for statisticians. International Journal of Approximate Reasoning 48, 365-377. E, L., Hannig, J. and Iyer, H. K. (2008). Fiducial intervals for variance components in an unbalanced two-component normal mixed linear model. J. Amer. Statist. Assoc. 103, 854-865. Efron, B. (1998). R. A. Fisher in the 21st century. Statist. Sci. 13, 95-122. With comments and a rejoinder by the author. Fisher, R. A. (1930). Inverse probability. Proceedings of the Cambridge Philosophical Society xxvi, 528-535. Fisher, R. A. (1933). The concepts of inverse probability and fiducial probability referring to unknown parameters. Proceedings of the Royal Society of London A 139, 343-348. Fisher, R. A. (1935a). The fiducial argument in statistical inference. Ann. Eugenics VI, 91-98. Fisher, R. A. (1935b). The logic of inductive inference. J. Roy. Statist. Soc. B 98, 29-82. Fraser, D. A. S. (1961). On fiducial inference. Ann. Math. Statist. 32, 661-676. Fraser, D. A. S. (1966). Structural probability and a generalization. Biometrika 53, 1–9. Fraser, D. A. S. (1968). The Structure of Inference. John Wiley & Sons, New York-London-Sydney. Fraser, D. A. S. (2006). Fiducial inference. In The New Palgrave Dictionary of Economics (Edited by S. Durlauf and L. Blume). Palgrave Macmillan, 2nd edition. Ghosh, J. K. (1994). Higher Order Asymptotics. NSF-CBMS Regional Conference Series. Hayward: Institute of Mathematical Statistics. Ghosh, J. K. and Ramamoorthi, R. V. (2003). Bayesian Nonparametrics. Springer Series in Statistics. 
Springer-Verlag, New York. Glagovskiy, Y. S. (2006). Construction of Fiducial Confidence Intervals For the Mixture of Cauchy and Normal Distributions. Master’s thesis, Department of Statistics, Colorado State University. Grundy, P. M. (1956). Fiducial distributions and prior distributions: an example in which the former cannot be associated with the latter. J. Roy. Statist. Soc. Ser. B 18, 217-221. GUM (1995). Guide to the Expression of Uncertainty in Measurement. International Organization for Standardization (ISO), Geneva, Switzerland. Hamada, M. and Weerahandi, S. (2000). Measurement system assessment via generalized inference. J. Quality Tech. 32, 241-253. Hannig, J. (1996). On conditional distributions as limits of martingales. Mgr. thesis (in Czech), Charles University, Prague, Czech Republic. Hannig, J., E, L., Abdel-Karim, A. and Iyer, H. K. (2006a). Simultaneous fiducial generalized confidence intervals for ratios of means of lognormal distributions. Austral. J. Statist. 35, 261-269. Hannig, J., Iyer, H. K. and Patterson, P. (2006b). Fiducial generalized confidence intervals. J. Amer. Statist. Assoc. 101, 254-269. Hannig, J. and Lee, T. C. M. (2007). Generalized fiducial inference for wavelet regression. Tech. rep., Colorado State University. Iyer, H. K. and Patterson, P. (2002). A recipe for constructing generalized pivotal quantities and generalized confidence intervals. Tech. Rep. 2002/10, Department of Statistics, Colorado State University. Iyer, H. K., Wang, C. M. J. and Mathew, T. (2004). Models and confidence intervals for true values in interlaboratory trials. J. Amer. Statist. Assoc. 99, 1060-1071. Jeffreys, H. (1940). Note on the Behrens-Fisher formula. Ann. Eugenics 10, 48-51. Jeffreys, H. (1961). Theory of Probability. Clarendon Press, Oxford, third edn. Le Cam, L. and Yang, G. L. (2000). Asymptotics in Statistics. Springer Series in Statistics. New York: Springer-Verlag, second edn. Liao, C. T. and Iyer, H. K. (2004). 
A tolerance interval for the normal distribution with several variance components. Statist. Sinica 14, 217-229. Lindley, D. V. (1958). Fiducial distributions and Bayes’ theorem. J. Roy. Statist. Soc. Ser. B 20, 102-107. McNally, R. J., Iyer, H. K. and Mathew, T. (2003). Tests for individual and population bioequivalence based on generalized p-values. Statistics in Medicine 22, 31-53. Mood, A. M., Graybill, F. A. and Boes, D. C. (1974). Introduction to the Theory of Statistics. McGraw-Hill, third edn. Pounds, S. and Morris, S. W. (2003). Estimating the occurrence of false positives and false negatives in microarray studies by approximating and partitioning the empirical distribution of p-values. Bioinformatics 19, 1236-1242. Salome, D. (1998). Statistical Inference via Fiducial Methods. Ph.D. thesis, University of Groningen. Searle, S. R., Casella, G. and McCulloch, C. E. (1992). Variance Components. John Wiley & Sons, New York. Stevens, W. L. (1950). Fiducial limits of the parameter of a discontinuous distribution. Biometrika 37, 117-129. Tsui, K.-W. and Weerahandi, S. (1989). Generalized p-values in significance testing of hypotheses in the presence of nuisance parameters. J. Amer. Statist. Assoc. 84, 602-607. Wang, C. M. and Iyer, H. K. (2005). Propagation of uncertainties in measurements using generalized inference. Metrologia 42, 145-153. Wang, C. M. and Iyer, H. K. (2006a). A generalized confidence interval for a measurand in the presence of type-A and type-B uncertainties. Measurement 39, 856–863. Wang, C. M. and Iyer, H. K. (2006b). Uncertainty analysis for vector measurands using fiducial inference. Metrologia 43, 486-494. Weerahandi, S. (1993). Generalized confidence intervals. J. Amer. Statist. Assoc. 88, 899-905. Weerahandi, S. (2004). Generalized Inference in Repeated Measures. Wiley, Hoboken, NJ. Wilkinson, G. N. (1977). On resolving the controversy in statistical inference. J. Roy. Statist. Soc. Ser. B 39, 119-171. 
With discussion. Yeo, I.-K. and Johnson, R. A. (2001). A uniform strong law of large numbers for U-statistics with application to transforming to near symmetry. Statist. Probab. Lett. 51, 63-69. Zabell, S. L. (1992). R. A. Fisher and the fiducial argument. Statist. Sci. 7, 369-387. The article I got this from is: Jan Hannig (The University of North Carolina at Chapel Hill), "On Generalized Fiducial Inference," Statistica Sinica 19 (2009), 491-544.
What is the fiducial argument and why has it not been accepted?
I am surprised that you don't consider us authorities. Here is a good reference: Encyclopedia of Biostatistics, Volume 2, page 1526; article titled "Fisher, Ronald Aylmer." Starting at the bottom of
What is the fiducial argument and why has it not been accepted? I am surprised that you don't consider us authorities. Here is a good reference: Encyclopedia of Biostatistics, Volume 2, page 1526; article titled "Fisher, Ronald Aylmer." Starting at the bottom of the first column on the page and going through most of the second column the authors Joan Fisher Box (R. A. Fisher's daughter) and A. W. F. Edwards write Fisher introduced the the fiducial argument in 1930 [11].... Controversy arose immediately. fisher had proposed the fiducial argument as an alternative to the Bayesian argument of inverse probability, which he condemned when no objective prior probability could be stated. They go on to discuss the debates with Jeffreys and Neyman (particularly Neyman on confidence intervals). The Neyman-Pearson theory of hypothesis testing and confidence intervals came out in the 1930s after Fisher's article. A key sentence followed. Later difficulties with the fiducial argument arose in cases of multivariate estimation because of the nonuniqueness of the pivotals. In the same volume of the Encyclopedia of Biostatistics there is an article pp. 1510-1515 titled "Fiducial Probability" by Teddy Seidenfeld which covers the method in detail and compares fiducial intervals to confidence intervals. To quote from the last paragraph of that article, In a 1963 conference on fiducial probability, Savage wrote 'The aim of fiducial probability ... seems to be what I term making the Bayesian omelet without breaking the Bayesian eggs.' In that sense, fiducial probability is impossible. As with many great intellectual contributions, what is of lasting value is what we learn trying to understand Fisher's insights on fiducial probability. (See Edwards[4] for much more on this theme.) His solution to the Behrens-Fisher problem, for example, was a brilliant treatment of nuisance parameters using Bayes' theorem. In this sense, "...the fiducial argument is 'learning from Fisher' [36, p.926]. 
Thus interpreted, it certainly remains a valuable addition to staistical lore. I think in these last few sentences Edwards is trying to put a favorable light on Fisher even though his theory was discredited. I am sure that you can find a wealth of information on this by going through these encyclopedia papers and similar ones in other statistics papers as well as biographical articles and books on Fisher. Some other references Box, J. Fisher (1978). "T. A. Fisher: The Life of a Scientist." Wiley, New York Fisher, R. A. (1930) Inverse Probability. Proceedings of the Cambridge Philosophical Society. 26, 528-535. Bennett, J. H. editor (1990) Statistical Inference and Analysis: Selected Correspondence of R. A. Fisher. Clarendon Press, Oxford. Edwards, A. W. F. (1995). Fiducial inference and the fundamental theorm of natural selection. Biometrics 51,799-809. Savage L. J. (1963) Discussion. Bulletin of the International Statistical Institute 40, 925-927. Seidenfeld, T. (1979). "Philosophical Problems of Statistical Inference" Reidel, Dordrecht . Seidenfeld, T. (1992). R. A. Fisher's fiducial argument and Bayes' theorem. Statistical Science 7, 358-368. Tukey, J. W. (1957). Some examples with fiducial relevance. Annals of Mathematical Statistics 28, 687-695. Zabell, S. L. (1992). R. A. Fisher and the fiducial argument. Statistical Science 7, 369-387. 
The concept is difficult to understand because Fisher kept changing it. As Seidenfeld said in his article in the Encyclopedia of Biostatistics,

Following the 1930 publication, during the remaining 32 years of his life, through two books and numerous articles, Fisher steadfastly held to the idea captured in (1), and the reasoning leading to it, which we may call 'fiducial inverse inference' ... then there is little wonder that Fisher caused such puzzles with his novel idea.

Equation (1) that Seidenfeld refers to is the fiducial distribution of the parameter $\theta$ given $x$: $\text{fid}(\theta|x) \propto \partial F/\partial \theta$, where $F(x,\theta)$ denotes a one-parameter cumulative distribution function for the random variable $X$ at $x$ with parameter $\theta$. At least this was Fisher's initial definition. Later it was extended to multiple parameters, and that is where the trouble began, with the nuisance parameter $\sigma$ in the Behrens-Fisher problem. So a fiducial distribution is like a posterior distribution for the parameter $\theta$ given the observed data $x$, but it is constructed without the inclusion of a prior distribution on $\theta$.

I went to some trouble getting all this, but it is not hard to find. We are really not needed to answer questions like this: a Google search with the key words "fiducial inference" would likely show everything I found and a whole lot more. I did a Google search and found that a UNC professor, Jan Hannig, has generalized fiducial inference in an attempt to improve it. A Google search yields a number of his recent papers and a PowerPoint presentation. I am going to copy and paste the last two slides from his presentation below:

Concluding Remarks

Generalized fiducial distributions often lead to attractive solutions with asymptotically correct frequentist coverage. Many simulation studies show that generalized fiducial solutions have very good small sample properties.
Current popularity of generalized inference in some applied circles suggests that if computers had been available 70 years ago, fiducial inference might not have been rejected.

Quotes

Zabell (1992): "Fiducial inference stands as R. A. Fisher's one great failure."
Efron (1998): "Maybe Fisher's biggest blunder will become a big hit in the 21st century!"

Just to add more references, here is the reference list I have taken from Hannig's 2009 Statistica Sinica paper. Pardon the repetition, but I think this will be helpful.

Burch, B. D. and Iyer, H. K. (1997). Exact confidence intervals for a variance ratio (or heritability) in a mixed linear model. Biometrics 53, 1318-1333.
Burdick, R. K., Borror, C. M. and Montgomery, D. C. (2005a). Design and Analysis of Gauge R&R Studies. ASA-SIAM Series on Statistics and Applied Probability. Society for Industrial and Applied Mathematics, Philadelphia, PA.
Burdick, R. K., Park, Y.-J., Montgomery, D. C. and Borror, C. M. (2005b). Confidence intervals for misclassification rates in a gauge R&R study. J. Quality Tech. 37, 294-303.
Cai, T. T. (2005). One-sided confidence intervals in discrete distributions. J. Statist. Plann. Inference 131, 63-88.
Casella, G. and Berger, R. L. (2002). Statistical Inference. Second edition. Wadsworth and Brooks/Cole Advanced Books and Software, Pacific Grove, CA.
Daniels, L., Burdick, R. K. and Quiroz, J. (2005). Confidence intervals in a gauge R&R study with fixed operators. J. Quality Tech. 37, 179-185.
Dawid, A. P. and Stone, M. (1982). The functional-model basis of fiducial inference. Ann. Statist. 10, 1054-1074. With discussions by G. A. Barnard and by D. A. S. Fraser, and a reply by the authors.
Dawid, A. P., Stone, M. and Zidek, J. V. (1973). Marginalization paradoxes in Bayesian and structural inference. J. Roy. Statist. Soc. Ser. B 35, 189-233. With discussion by D. J. Bartholomew, A. D. McLaren, D. V. Lindley, Bradley Efron, J. Dickey, G. N. Wilkinson, A. P. Dempster, D. V. Hinkley, M. R. Novick, Seymour Geisser, D. A. S. Fraser and A. Zellner, and a reply by A. P. Dawid, M. Stone, and J. V. Zidek.
Dempster, A. P. (1966). New methods for reasoning towards posterior distributions based on sample data. Ann. Math. Statist. 37, 355-374.
Dempster, A. P. (1968). A generalization of Bayesian inference (with discussion). J. Roy. Statist. Soc. Ser. B 30, 205-247.
Dempster, A. P. (2008). The Dempster-Shafer calculus for statisticians. International Journal of Approximate Reasoning 48, 365-377.
E, L., Hannig, J. and Iyer, H. K. (2008). Fiducial intervals for variance components in an unbalanced two-component normal mixed linear model. J. Amer. Statist. Assoc. 103, 854-865.
Efron, B. (1998). R. A. Fisher in the 21st century. Statist. Sci. 13, 95-122. With comments and a rejoinder by the author.
Fisher, R. A. (1930). Inverse probability. Proceedings of the Cambridge Philosophical Society 26, 528-535.
Fisher, R. A. (1933). The concepts of inverse probability and fiducial probability referring to unknown parameters. Proceedings of the Royal Society of London A 139, 343-348.
Fisher, R. A. (1935a). The fiducial argument in statistical inference. Ann. Eugenics 6, 91-98.
Fisher, R. A. (1935b). The logic of inductive inference. J. Roy. Statist. Soc. B 98, 29-82.
Fraser, D. A. S. (1961). On fiducial inference. Ann. Math. Statist. 32, 661-676.
Fraser, D. A. S. (1966). Structural probability and a generalization. Biometrika 53, 1-9.
Fraser, D. A. S. (1968). The Structure of Inference. John Wiley & Sons, New York-London-Sydney.
Fraser, D. A. S. (2006). Fiducial inference. In The New Palgrave Dictionary of Economics (Edited by S. Durlauf and L. Blume). Palgrave Macmillan, 2nd edition.
Ghosh, J. K. (1994). Higher Order Asymptotics. NSF-CBMS Regional Conference Series. Institute of Mathematical Statistics, Hayward.
Ghosh, J. K. and Ramamoorthi, R. V. (2003). Bayesian Nonparametrics. Springer Series in Statistics. Springer-Verlag, New York.
Glagovskiy, Y. S. (2006). Construction of Fiducial Confidence Intervals For the Mixture of Cauchy and Normal Distributions. Master's thesis, Department of Statistics, Colorado State University.
Grundy, P. M. (1956). Fiducial distributions and prior distributions: an example in which the former cannot be associated with the latter. J. Roy. Statist. Soc. Ser. B 18, 217-221.
GUM (1995). Guide to the Expression of Uncertainty in Measurement. International Organization for Standardization (ISO), Geneva, Switzerland.
Hamada, M. and Weerahandi, S. (2000). Measurement system assessment via generalized inference. J. Quality Tech. 32, 241-253.
Hannig, J. (1996). On conditional distributions as limits of martingales. Mgr. thesis (in Czech), Charles University, Prague, Czech Republic.
Hannig, J., E, L., Abdel-Karim, A. and Iyer, H. K. (2006a). Simultaneous fiducial generalized confidence intervals for ratios of means of lognormal distributions. Austral. J. Statist. 35, 261-269.
Hannig, J., Iyer, H. K. and Patterson, P. (2006b). Fiducial generalized confidence intervals. J. Amer. Statist. Assoc. 101, 254-269.
Hannig, J. and Lee, T. C. M. (2007). Generalized fiducial inference for wavelet regression. Tech. rep., Colorado State University.
Iyer, H. K. and Patterson, P. (2002). A recipe for constructing generalized pivotal quantities and generalized confidence intervals. Tech. Rep. 2002/10, Department of Statistics, Colorado State University.
Iyer, H. K., Wang, C. M. J. and Mathew, T. (2004). Models and confidence intervals for true values in interlaboratory trials. J. Amer. Statist. Assoc. 99, 1060-1071.
Jeffreys, H. (1940). Note on the Behrens-Fisher formula. Ann. Eugenics 10, 48-51.
Jeffreys, H. (1961). Theory of Probability. Third edition. Clarendon Press, Oxford.
Le Cam, L. and Yang, G. L. (2000). Asymptotics in Statistics. Second edition. Springer Series in Statistics. Springer-Verlag, New York.
Liao, C. T. and Iyer, H. K. (2004). A tolerance interval for the normal distribution with several variance components. Statist. Sinica 14, 217-229.
Lindley, D. V. (1958). Fiducial distributions and Bayes' theorem. J. Roy. Statist. Soc. Ser. B 20, 102-107.
McNally, R. J., Iyer, H. K. and Mathew, T. (2003). Tests for individual and population bioequivalence based on generalized p-values. Statistics in Medicine 22, 31-53.
Mood, A. M., Graybill, F. A. and Boes, D. C. (1974). Introduction to the Theory of Statistics. Third edition. McGraw-Hill.
Pounds, S. and Morris, S. W. (2003). Estimating the occurrence of false positives and false negatives in microarray studies by approximating and partitioning the empirical distribution of p-values. Bioinformatics 19, 1236-1242.
Salome, D. (1998). Statistical Inference via Fiducial Methods. Ph.D. thesis, University of Groningen.
Searle, S. R., Casella, G. and McCulloch, C. E. (1992). Variance Components. John Wiley & Sons, New York.
Stevens, W. L. (1950). Fiducial limits of the parameter of a discontinuous distribution. Biometrika 37, 117-129.
Tsui, K.-W. and Weerahandi, S. (1989). Generalized p-values in significance testing of hypotheses in the presence of nuisance parameters. J. Amer. Statist. Assoc. 84, 602-607.
Wang, C. M. and Iyer, H. K. (2005). Propagation of uncertainties in measurements using generalized inference. Metrologia 42, 145-153.
Wang, C. M. and Iyer, H. K. (2006a). A generalized confidence interval for a measurand in the presence of type-A and type-B uncertainties. Measurement 39, 856-863.
Wang, C. M. and Iyer, H. K. (2006b). Uncertainty analysis for vector measurands using fiducial inference. Metrologia 43, 486-494.
Weerahandi, S. (1993). Generalized confidence intervals. J. Amer. Statist. Assoc. 88, 899-905.
Weerahandi, S. (2004). Generalized Inference in Repeated Measures. Wiley, Hoboken, NJ.
Wilkinson, G. N. (1977). On resolving the controversy in statistical inference (with discussion). J. Roy. Statist. Soc. Ser. B 39, 119-171.
Yeo, I.-K. and Johnson, R. A. (2001). A uniform strong law of large numbers for U-statistics with application to transforming to near symmetry. Statist. Probab. Lett. 51, 63-69.
Zabell, S. L. (1992). R. A. Fisher and the fiducial argument. Statist. Sci. 7, 369-387.

The article I got this from is: Hannig, J. (2009). On generalized fiducial inference. Statistica Sinica 19, 491-544.
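Fisher's one-parameter recipe above can be illustrated numerically. The following is a sketch of my own (the normal location model is a standard textbook illustration, not taken from the encyclopedia articles); the sign is flipped so the density is nonnegative, since $F(x,\theta)$ decreases in $\theta$ here:

```python
from math import erf, exp, pi, sqrt

# Normal location model: X ~ N(theta, 1), so F(x, theta) = Phi(x - theta).
def F(x, theta):
    return 0.5 * (1.0 + erf((x - theta) / sqrt(2.0)))

# Fiducial density of theta given x: -dF/dtheta, by central differences.
def fid_density(theta, x, h=1e-6):
    return -(F(x, theta + h) - F(x, theta - h)) / (2.0 * h)

# For this model the fiducial density equals phi(x - theta), i.e. the
# N(x, 1) density in theta -- a "posterior" obtained without any prior.
phi = lambda z: exp(-z * z / 2.0) / sqrt(2.0 * pi)
x_obs = 1.3
print(abs(fid_density(0.5, x_obs) - phi(x_obs - 0.5)) < 1e-6)  # -> True
```

For multiparameter models no such unique recipe exists, which is exactly where the controversy described above started.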
What is the fiducial argument and why has it not been accepted?
Fiducial inference sometimes interprets likelihoods as probabilities for the parameter $\theta$. That is, $M(x)L(\theta|x)$, provided that $M(x)$ is finite, is interpreted as a probability density function for $\theta$ in which $L(\theta|x)$ is the likelihood function of $\theta$ and $M(x)=(\int_{-\infty}^{\infty}L(\theta|x)d\theta)^{-1}$. You can see Casella and Berger, pages 291-2, for more details.
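As a sketch (this normal example is my own illustration, not taken from Casella and Berger): for $X \sim N(\theta, 1)$ the likelihood $L(\theta|x)$ already integrates to $1$ over $\theta$, so $M(x) = 1$ and $M(x)L(\theta|x)$ is simply the $N(x,1)$ density in $\theta$. A numerical check of the normalizer:

```python
from math import exp, pi, sqrt

# Likelihood of theta for a single observation x from N(theta, 1).
def likelihood(theta, x):
    return exp(-(x - theta) ** 2 / 2.0) / sqrt(2.0 * pi)

x_obs = 0.7
# Riemann sum of L(theta | x) over theta (support effectively in [-8, 8]):
step = 1e-3
mass = sum(likelihood(i * step, x_obs) for i in range(-8000, 8001)) * step
# mass is ~1, so M(x) = 1/mass is ~1 and M(x)*L(theta|x) is the N(x_obs, 1)
# density, interpreted as a distribution for theta.
```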
What is the fiducial argument and why has it not been accepted?
Just to add to what has been said: there was controversy between Fisher and Neyman about significance testing and interval estimation. Neyman defined confidence intervals while Fisher introduced fiducial intervals. They argued differently about their construction, but the constructed intervals were usually the same, so the difference in the definitions was largely ignored until it was discovered that they differed when dealing with the Behrens-Fisher problem. Fisher argued adamantly for the fiducial approach, but in spite of his brilliance and his strong advocacy of the method, there appeared to be flaws, and since the statistical community considers it discredited it is not commonly discussed or used. The Bayesian and frequentist approaches to inference are the two that remain.
What is the fiducial argument and why has it not been accepted?
TL;DR The fiducial argument has not been accepted because the idea doesn't work. The fiducial distribution is disguised as something that looks like a probability distribution (and people might have wanted it to behave like a probability distribution), but it is not the same as a probability distribution. It is only a function of probabilities. You cannot do the same things with a fiducial distribution as with, for instance, a posterior probability density. This is illustrated in the example below, where we compute an 80% fiducial interval but in some cases get 100% coverage.

Example where it doesn't work

In the nice answer by Michael R. Chernick it is mentioned that the logic behind the fiducial distribution started to fail when people tried to apply it in a multidimensional setting, like the Behrens-Fisher problem. Here we give a one-dimensional example that already shows that it does not work.

Let some statistic $X$ be distributed as a Uniform distribution with $$X \sim \mathcal{U}\left(\theta-0.5\sqrt{1+\theta^2},\theta+0.5\sqrt{1+\theta^2}\right)$$ We can plot the cumulative distribution function (CDF) $F(x;\theta)$ as a function of $x$ and $\theta$ using isolines. The image shows the CDF as a function of $x$ and $\theta$.

In the vertical direction, for fixed $\theta$, the function describes the CDF of the observation $x$, which is a random variable. We can compute the probability density function as $$\frac{\partial}{\partial x} F(x;\theta)$$ In the horizontal direction, for fixed $x$, the function describes the fiducial distribution for the estimation of $\theta$. We can compute the fiducial density as $$\frac{\partial}{\partial\theta} F(x;\theta)$$

For example, this is what the fiducial density looks like when we have observed $x=0.1$. The points in red are inside the 80% interval [-0.396, 0.475]: the 80% of the probability mass with the highest density.

The problem is the following: the probability statements made by the fiducial distribution only 'work' when we consider the same quantiles, independent of the observation $x$. However, when we change the quantiles as a function of $x$, the probability statements entailed by the fiducial distribution change and become false. Therefore the distribution cannot be used in the same way as a probability density. This happens, for instance, when we compute a highest density interval in combination with a fiducial density that does not have the same shape for different observations $x$ (which makes us select different quantiles). We can see this in the plot when $\theta = 0$: in that case we get 100% coverage by the 80% highest density interval instead of 80% coverage. This is not what you would expect if the fiducial distribution could be used as a probability density for $\theta$.

R code for the two plots:

#### parameters for drawing
d = 0.01
t = seq(-2, 2.6, d)
t2 = seq(3.4, 4, d)
tm = 3
grey = rgb(0.3, 0.3, 0.3)

### empty canvas
plot(-10, -10, xlim = c(-2, 4), ylim = c(-3, 5),
     xlab = expression(theta), ylab = "x",
     main = "example of highest density 80% fiducial interval")

### add isolines
for (q in c(0:10)*0.1) {
  lines(t, t + (q-0.5)*sqrt(1+t^2), col = grey)
  lines(t2, t2 + (q-0.5)*sqrt(1+t2^2), col = grey)
  text(tm, tm + (q-0.5)*sqrt(1+tm^2), bquote(F(x*";"*theta) == .(q)),
       col = grey, cex = 0.6, srt = 15 + 35*q)
}

fiducial = function(x, plotting = TRUE, alpha = 0.8) {
  dt = 0.001
  ### domain of the fiducial distribution
  tmin = (1/3) * (4*x - sqrt(4*x^2 + 3))
  tmax = (1/3) * (4*x + sqrt(4*x^2 + 3))
  ts = seq(tmin, tmax, dt)
  ### compute the fiducial density
  f = (ts*x + 1)/(ts^2 + 1)^1.5
  ### calculate the highest density region by ordering densities
  ord = order(f)                 # lowest density first
  p = cumsum(f[ord])*dt          # cumulative probability
  sel = which(p < 1 - alpha)     # complement of the highest alpha% density
  output = range(ts[ord][-sel])  # range of the highest alpha% density interval
  ### example plot of the density
  if (plotting == TRUE) {
    plot(ts, f, col = 1 + (ts >= output[1]) * (ts <= output[2]))
  }
  output
}

### compute intervals as a function of the observed x
xs = seq(-4, 5, 0.01)
low = c()    # empty arrays that will be filled
high = c()
for (x in xs) {
  interval = fiducial(x, plotting = FALSE)
  low = c(low, interval[1])
  high = c(high, interval[2])
}

### add curves for the fiducial interval
lines(low[low < 2.6], xs[low < 2.6], lwd = 2)
lines(high[high < 2.6], xs[high < 2.6], lwd = 2)

### add example interval for observation x = 0.1
int = fiducial(0.1, plotting = FALSE)
lines(int, c(0.1, 0.1), lwd = 2, col = 2)
points(int, c(0.1, 0.1), pch = 21, col = 2, bg = 0, cex = 0.7)
text(0.8, 0.1, "example interval if x=0.1", pos = 4, col = 2)

### example plot of the fiducial density
fiducial(0.1)
title("example fiducial distribution if x = 0.1 \n highest 80% density is highlighted in red")
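The coverage claim can also be checked without any plotting. Below is a sketch in Python (my re-implementation of the answer's fiducial function on a grid; it is not part of the original answer): with true $\theta = 0$, every possible observation $x \in (-0.5, 0.5)$ produces an 80% highest-density interval that contains $0$, so the realized coverage is 100% rather than 80%.

```python
from math import sqrt

def fiducial_hdi(x, alpha=0.8, n_grid=4001):
    """Highest-density interval of mass alpha for the fiducial density
    f(t) = (1 + t*x) / (1 + t^2)^1.5 on its support [tmin, tmax]."""
    tmin = (4.0 * x - sqrt(4.0 * x * x + 3.0)) / 3.0
    tmax = (4.0 * x + sqrt(4.0 * x * x + 3.0)) / 3.0
    dt = (tmax - tmin) / (n_grid - 1)
    ts = [tmin + i * dt for i in range(n_grid)]
    f = [(1.0 + t * x) / (1.0 + t * t) ** 1.5 for t in ts]
    order = sorted(range(n_grid), key=lambda i: f[i])  # low density first
    mass, cut = 0.0, 0
    for cut, i in enumerate(order):      # drop the lowest (1 - alpha) mass
        mass += f[i] * dt
        if mass >= 1.0 - alpha:
            break
    kept = [ts[i] for i in order[cut:]]  # grid points inside the HDI
    return min(kept), max(kept)

# True theta = 0 means X ~ Uniform(-0.5, 0.5).  Check whether the "80%"
# interval contains theta = 0 across a grid of possible observations x:
xs = [-0.498 + i * 0.004 for i in range(250)]
covered = [lo <= 0.0 <= hi for lo, hi in (fiducial_hdi(x) for x in xs)]
coverage = sum(covered) / len(covered)
```

For $x = 0.1$ this reproduces the interval [-0.396, 0.475] quoted above, and coverage comes out as 1.0.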
What is the fiducial argument and why has it not been accepted?
In a large undergraduate engineering intro stats class at Georgia Tech, when discussing confidence intervals for the population mean with variance known, one student asked me (in the language of MATLAB): "Can I calculate the interval as > norminv([alpha/2, 1-alpha/2], barX, sigma/sqrt(n))?" In translation: could he take the $\frac{\alpha}{2}$ and $1-\frac{\alpha}{2}$ quantiles of a normal distribution centered at $\bar X$ with scale $\frac\sigma{\sqrt{n}}$? I said: of course YES, pleasantly surprised that he had naturally arrived at the concept of a fiducial distribution.
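The same computation in Python (a sketch using only the standard library; the numbers for $\bar X$, $\sigma$ and $n$ are made up) shows why the student's quantile trick gives exactly the classical interval:

```python
from math import sqrt
from statistics import NormalDist

xbar, sigma, n, alpha = 5.2, 2.0, 25, 0.05   # hypothetical summary data
se = sigma / sqrt(n)

# Fiducial-style interval: quantiles of N(xbar, se^2), read as a
# distribution for the unknown mean theta (the norminv call above).
fid = NormalDist(mu=xbar, sigma=se)
fiducial = (fid.inv_cdf(alpha / 2), fid.inv_cdf(1 - alpha / 2))

# Classical confidence interval: xbar -/+ z_{1-alpha/2} * sigma/sqrt(n).
z = NormalDist().inv_cdf(1 - alpha / 2)
classical = (xbar - z * se, xbar + z * se)
# The two constructions coincide numerically.
```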
What are the differences between Logistic Function and Sigmoid Function?
Yes, the sigmoid function is a special case of the logistic function $$f(x) = \frac{L}{1+e^{-k(x-x_0)}}$$ when $L=1$, $k=1$, $x_0 =0$. If you play around with the parameters (Wolfram Alpha), you will see that $L$ is the maximum value the function can take: $e^{-k(x-x_0)}$ is always greater than or equal to 0, so the maximum is approached as that term goes to 0, and it equals $L/1 = L$. $x_0$ controls where on the $x$ axis the growth is centred, because if you put $x_0$ into the function, $x_0 - x_0$ cancels out and $e^0 = 1$, so you end up with $f(x_0) = L/2$, the midpoint of the growth. The parameter $k$ controls how steep the change from the minimum to the maximum value is.
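A short check (a Python sketch; the function names are mine) that the standard sigmoid is the $L=1$, $k=1$, $x_0=0$ member of the logistic family, and that $f(x_0) = L/2$:

```python
import math

def logistic(x, L=1.0, k=1.0, x0=0.0):
    """Generalized logistic function: L / (1 + exp(-k*(x - x0)))."""
    return L / (1.0 + math.exp(-k * (x - x0)))

def sigmoid(x):
    """Standard sigmoid: 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

# logistic(x) with the default parameters equals sigmoid(x) for every x;
# the midpoint f(x0) is L/2, and a larger k makes the transition steeper.
```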
What are the differences between Logistic Function and Sigmoid Function?
The logistic function is: $$ f(x) = \frac{K}{1+Ce^{-rx}} $$ where $C$ is the constant of integration, $r$ is the proportionality constant, and $K$ is the threshold limit. Taking $K=1$ (so the function is confined between $0$ and $1$) together with $C=1$ and $r=1$, we get $\frac{1}{1+e^{-x}}$, which is the sigmoid function.
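The same reduction in this $K$, $C$, $r$ parametrization (a Python sketch; the function name is mine). Note that $C$ only shifts the curve along the $x$ axis: writing $C = e^{rx_0}$ recovers the midpoint form with $x_0 = \ln(C)/r$:

```python
import math

def logistic_KCr(x, K=1.0, C=1.0, r=1.0):
    """Logistic function in the K, C, r form: K / (1 + C*exp(-r*x))."""
    return K / (1.0 + C * math.exp(-r * x))

# With K = C = r = 1 this is exactly the sigmoid 1/(1 + exp(-x)),
# and with C = exp(r*x0) the curve's midpoint K/2 sits at x = x0.
```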
What are the differences between Logistic Function and Sigmoid Function?
I would like to put the answer above the other way around: rather than "the sigmoid function is a special case of the logistic function", I would say "the logistic function is a special case of the sigmoid function". All S-shaped, monotonically increasing functions confined between bounds $a$ and $b$ are sigmoid functions.
Differences between a statistical model and a probability model?
A Probability Model consists of the triplet $(\Omega,{\mathcal F},{\mathbb P})$, where $\Omega$ is the sample space, ${\mathcal F}$ is a $\sigma$−algebra (events) and ${\mathbb P}$ is a probability measure on ${\mathcal F}$. Intuitive explanation. A probability model can be interpreted as a known random variable $X$. For example, let $X$ be a Normally distributed random variable with mean $0$ and variance $1$. In this case the probability measure ${\mathbb P}$ is associated with the Cumulative Distribution Function (CDF) $F$ through $$F(x)={\mathbb P}(X\leq x) = {\mathbb P}(\omega\in\Omega:X(\omega)\leq x) =\int_{-\infty}^x \dfrac{1}{\sqrt{2\pi}}\exp\left({-\dfrac{t^2}{2}}\right)dt.$$ Generalisations. The definition of Probability Model depends on the mathematical definition of probability, see for example Free probability and Quantum probability. A Statistical Model is a set ${\mathcal S}$ of probability models, this is, a set of probability measures/distributions on the sample space $\Omega$. This set of probability distributions is usually selected for modelling a certain phenomenon from which we have data. Intuitive explanation. In a Statistical Model, the parameters and the distribution that describe a certain phenomenon are both unknown. An example of this is the familiy of Normal distributions with mean $\mu\in{\mathbb R}$ and variance $\sigma^2\in{\mathbb R_+}$, this is, both parameters are unknown and you typically want to use the data set for estimating the parameters (i.e. selecting an element of ${\mathcal S}$). This set of distributions can be chosen on any $\Omega$ and ${\mathcal F}$, but, if I am not mistaken, in a real example only those defined on the same pair $(\Omega,{\mathcal F})$ are reasonable to consider. Generalisations. This paper provides a very formal definition of Statistical Model, but the author mentions that "Bayesian model requires an additional component in the form of a prior distribution ... 
Although Bayesian formulations are not the primary focus of this paper". Therefore the definition of Statistical Model depends on the kind of model we use: parametric or nonparametric. Also in the parametric setting, the definition depends on how parameters are treated (e.g. Classical vs. Bayesian). The difference is: in a probability model you know exactly the probability measure, for example a $\mbox{Normal}(\mu_0,\sigma_0^2)$, where $\mu_0,\sigma_0^2$ are known parameters, while in a statistical model you consider sets of distributions, for example $\mbox{Normal}(\mu,\sigma^2)$, where $\mu,\sigma^2$ are unknown parameters. Neither of them requires a data set, but I would say that a Statistical Model is usually selected for modelling one.
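The distinction can be illustrated with a short stdlib-only Python sketch (the data, seed, and variable names here are mine, added purely for illustration): the probability model is one fully specified measure, while the statistical model is a family from which the data select a member, e.g. by maximum likelihood.

```python
import random

random.seed(0)

# Probability model: one fully specified measure, here Normal(0, 1).
# Statistical model: the family {Normal(mu, sigma^2) : mu real, sigma^2 > 0};
# the data are used to select one member of the family.
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Maximum-likelihood estimates pick out an element of the statistical model.
mu_hat = sum(data) / len(data)
sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / len(data)
```

With this sample the fitted member of the family should be close to the true $\mbox{Normal}(0,1)$, but the point is that before seeing data only the family, not the member, is fixed.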
8,033
Can PCA be applied for time series data?
One approach could be to take the first time differences of your 12 variables to ensure stationarity. Then calculate the $12\times12$ covariance matrix and perform PCA on it. This will be some sort of average PCA over the whole time span, and will not say anything about how the different time lags affect each other. But it could be a good starting point. If you are interested in decomposing the time domain as well, I would check out SSA as suggested in the comments. When your series are (assumed) stationary, a single covariance matrix is meaningful. If your data are integrated of order 1 or higher, as I suspect they might be, the estimation of a single covariance matrix will not yield consistent results. A random walk, for example, is integrated of order 1, and the estimated covariance of two random walks does not say anything about their co-movement; here co-integration analysis is required. As suggested in the comments, PCA in itself doesn't care about stationarity, so you can feed PCA any positive semi-definite matrix and the PC decomposition will be fine in a PCA sense. But if your estimated covariance matrix does not represent anything meaningful about the data, then PCA will, of course, not either.
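The difference-then-PCA recipe can be sketched in a few lines of numpy (the simulated random-walk data, seed, and variable names are hypothetical, just to make the sketch runnable):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12)).cumsum(axis=0)  # 12 integrated (random-walk) series

D = np.diff(X, axis=0)               # first differences -> stationary increments
C = np.cov(D, rowvar=False)          # 12 x 12 covariance matrix of the differences
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]    # sort components by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
explained = eigvals / eigvals.sum()  # variance share of each PC
```

Running PCA on `C` rather than on the covariance of the levels is exactly the point above: the levels' covariance is not consistently estimable for integrated series, while the differences' covariance is.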
8,034
Can PCA be applied for time series data?
Yes, PCA on time series is performed all the time in financial engineering (quantitative finance) and neurology. In financial engineering, the data matrix is constructed with assets (e.g., stocks) in columns which represent the features, and the rows representing e.g. days (or objects) for end-of-day trading. Thus, the data matrix $\underset{t \times p}{\bf X}$ has $t$ rows and $p$ columns. However, note that log-returns, $r_t=\log(P_t) - \log(P_{t-1}) = \log(P_t/P_{t-1})$, are used since daily prices are log-normally distributed -- i.e., skewed with right tails. Since there are 250 trading days/year, it's appropriate to fetch 1000 days of data, which represent 4 years of trading. Since the same unit (e.g. USD) is usually used for daily log-price returns, the $p \times p$ covariance matrix for features is used for eigendecomposition. Otherwise, if different currencies are used, the correlation matrix is used for eigendecomposition, since correlation mean-zero standardizes the columns of $\bf{X}$. When done running PCA on assets, you can look at which stocks load on which PCs, a sort of clustering approach, or use the PC scores as input into other analyses. PCA is also run on the $t \times t$ covariance matrix for days, with assets in rows, in order to collapse days that correlate together into a single PC, since the general idea is that days can be redundant -- and when feeding data into e.g. a neural network, you don't want data rows to be redundant or features to be correlated (you want them to be orthogonal), since a neural net will waste time on learning the correlation. This approach does not focus on autocorrelation, however. In quantitative finance, there is also a large interest in first finding the noise cutoff in eigenvalues of the covariance (correlation) matrix for many assets in order to improve (Markowitzian) portfolio optimization, since you want a portfolio that sits on the "efficient frontier" with assets that are uncorrelated. 
This approach exploits the Marchenko-Pastur law and the ratio $\gamma=t/n$ of the data matrix $\bf{X}$ for fitting the eigenvalue density, and finding the noise cutoff known as $\lambda^+$, above which eigenvalues represent the signal, and below which eigenvalues represent noise. Once the noise eigenvalues are identified, the new dataset is based on (multivariate) regression of the original data on the PC scores representing the noise eigenvectors, $\mathbf{Y}=\mathbf{F}_n \beta$, and the residuals are then used as the denoised dataset, i.e., $\hat{\bf{X}}=\bf{Y}-\hat{\bf{Y}}$. Wealth values (cumulative return) from portfolios constructed using weights derived from the new dataset (residuals) have been shown to be much greater than without using this approach. Last, there's also a basic method to remove the "market effect" or widespread correlation among stock returns by regressing the asset data on the first PC representing the major (greatest) eigenvalue, $\mathbf{Y}=\mathbf{f}_1 \beta$, and pulling back the residuals to represent the new data, which will have the widespread market correlation removed (since the first PC always represents stocks with high multicollinearity). This approach addresses market sentiment hinged on "herd mentality." In neurology, PCA is run on time series for action potentials in different wavelength bands obtained from an EEG. Transforming the action potentials into orthogonal (uncorrelated) PC score vectors and inputting the PCs into other analyses is the primary means by which statistical power was increased in statistical genetic modelling of complex traits for behavioral genetics (since phenotypes for e.g. bipolar, novelty-seeking, schizotypal, schizophrenia often overlap). 
The large Australian genetic twin studies were instrumental in parsing out these overlapping traits in behavioral genetics, because if there are disease differences among identical twins which are reared together (grow up in the same household), causal inference may point to exposure in different environments when they were older instead of their identical genetics. (identical twins "share 100% of their genes all the time").
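The log-return plus Marchenko-Pastur noise-edge step described above can be sketched in numpy. This is a toy version on simulated iid prices (so essentially all eigenvalues should sit below the noise edge); the sizes, seed, and names are arbitrary choices of mine, and the edge formula uses $\sigma^2=1$ for a correlation matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
t, p = 1000, 100                                   # 1000 days, 100 assets
prices = 100 * np.exp(rng.normal(0, 0.01, size=(t + 1, p)).cumsum(axis=0))

r = np.diff(np.log(prices), axis=0)                # daily log-returns, t x p
corr = np.corrcoef(r, rowvar=False)                # p x p correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending eigenvalues

gamma = p / t                                      # aspect ratio of the return matrix
lam_plus = (1 + np.sqrt(gamma)) ** 2               # Marchenko-Pastur upper edge
signal = eigvals[eigvals > lam_plus]               # eigenvalues kept as signal
```

On real asset returns, by contrast, the first eigenvalue (the "market mode") typically sits far above `lam_plus`, which is what the denoising and market-effect-removal steps exploit.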
8,035
How many stickers do I need to complete my FIFA Panini album?
That is a beautiful Coupon Collector's Problem, with a little twist introduced by the fact that stickers come in packs of 5. If the stickers were bought individually the results are known, as you can see here. All the estimates for a 90% upper bound for individually-bought stickers are also upper bounds for the problem with a pack of 5, but a less tight upper bound. I think that getting a better 90%-probability upper bound, using the pack-of-5 dependence, would get a lot more difficult and would not give you a far better result. So, using the tail estimate $ P[T>\beta n \log n] \leq n^{-\beta+1}$ with $n=424$ and $n^{-\beta+1} = 0.1$, you'll get to a good answer. EDIT: The article "The collector's problem with group drawings" (Wolfgang Stadje), a reference of the article brought by Assuranceturix, presents an exact analytical solution for the Coupon Collector's Problem with "sticker packs". Before writing the theorem, some notation definitions: $S$ would be the set of all possible stickers, $s = |S|$. $A \subset S$ would be the subset that interests you (in the OP, $A = S$), and $l = |A|$. We're going to draw, with replacement, $k$ random subsets of $m$ different stickers. $X_{k}(A)$ will be the number of elements of $A$ that appear in at least one of those subsets. The theorem says that: $$ P(X_{k}(A) = n) = {l \choose n} \sum_{j=0}^{n}(-1)^j {n \choose j}\left[\frac{\binom{s+n-l-j}{m}}{\binom{s}{m}}\right]^k $$ So, for the OP we have $ l=s=n=424$ and $m=5$. I did some tries with values of $k$ near the estimate for the classical coupon collector's problem (729 packs) and I got a probability of 90.02% for $k$ equal to 700. So it was not so far from the upper bound :)
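The theorem specialises neatly to code: with $l = n = s$ the leading binomial is 1 and $s+n-l-j$ reduces to $s-j$. A short sketch (the function name is mine; `math.comb` returns 0 when $s-j < m$, which is exactly what the formula needs):

```python
from math import comb

def prob_complete(k, s=424, m=5):
    """P(all s stickers collected after k packs of m distinct stickers),
    using Stadje's formula with A = S, i.e. l = n = s."""
    c = comb(s, m)
    return sum((-1) ** j * comb(s, j) * (comb(s - j, m) / c) ** k
               for j in range(s + 1))
```

Evaluating this near $k = 700$ reproduces the roughly 90% completion probability quoted above for the 424-sticker album.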
8,036
How many stickers do I need to complete my FIFA Panini album?
The other day I came across a paper that addresses a closely related question: http://www.unige.ch/math/folks/velenik/Vulg/Paninimania.pdf If I have understood it correctly, the expected number of packs you would need to buy would be: $\binom{424}{5}\sum_{j=1}^{424}\left(-1\right)^{j+1}\frac{\binom{424}{j}}{\binom{424}{5}-\binom{424-j}{5}}$ However, as eqperes points out in the comments, the specific question the OP asks is actually covered in detail in another paper that is not open access. Their final conclusion suggests the following strategy (for an album of 660 stickers): Buy a box of 100 packs of 5 stickers (500 stickers, guaranteed to be all different). Buy 40 more packs of 5 stickers and swap the duplicates until you have at most 50 missing stickers. Purchase the remaining stickers directly from Panini (these cost approx. 1.5 times as much). This is a total of 140 packs + up to 15 extra packs' worth of stickers (by cost) purchased in a targeted fashion, equivalent to at most 155 packs.
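That expression is the standard inclusion-exclusion formula for the expected waiting time ($E[T]=\sum_{k\geq 0}P(T>k)$, summing each geometric tail), and it can be evaluated exactly with rational arithmetic. A sketch (the function name is mine; the binomials get huge, so `Fraction` avoids the catastrophic cancellation a float sum would suffer):

```python
from fractions import Fraction
from math import comb

def expected_packs(s=424, m=5):
    """Exact expected number of packs of m distinct stickers needed to
    complete an album of s stickers, by inclusion-exclusion."""
    c = comb(s, m)
    total = Fraction(0)
    for j in range(1, s + 1):
        total += (-1) ** (j + 1) * Fraction(c * comb(s, j), c - comb(s - j, m))
    return total
```

For the 424-sticker album this mean comes out in the mid-500s of packs, noticeably below the roughly 700 packs needed for a 90% completion probability, as expected for such a right-skewed waiting-time distribution.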
8,037
Interpretation of mean absolute scaled error (MASE)
In the linked blog post, Rob Hyndman calls for entries to a tourism forecasting competition. Essentially, the blog post serves to draw attention to the relevant IJF article, an ungated version of which is linked to in the blog post. The benchmarks you refer to - 1.38 for monthly, 1.43 for quarterly and 2.28 for yearly data - were apparently arrived at as follows. The authors (all of them are expert forecasters and very active in the IIF - no snake oil salesmen here) are quite capable of applying standard forecasting algorithms or forecasting software, and they are probably not interested in a simple ARIMA submission. So they went and applied some standard methods to their data. For the winning submission to be invited for a paper in the IJF, they ask that it improve on the best of these standard methods, as measured by the MASE. So your question essentially boils down to: Given that a MASE of 1 corresponds to a forecast that is out-of-sample as good (by MAD) as the naive random walk forecast in-sample, why can't standard forecasting methods like ARIMA improve on 1.38 for monthly data? Here, the 1.38 MASE comes from Table 4 in the ungated version. It is the average MASE over 1-24-month-ahead forecasts from ARIMA. The other standard methods, like ForecastPro, ETS etc. perform even worse. And here, the answer gets hard. It is always very problematic to judge forecast accuracy without considering the data. One possibility I could think of in this particular case could be accelerating trends. Suppose that you try to forecast $\exp(t)$ with standard methods. None of these will capture the accelerating trend (and this is usually a Good Thing - if your forecasting algorithm often models an accelerating trend, you will likely far overshoot your mark), and they will yield a MASE that is above 1. 
Other explanations could, as you say, be different structural breaks, e.g., level shifts or external influences like SARS or 9/11, which would not be captured by the non-causal benchmark models, but which could be modeled by dedicated tourism forecasting methods (although using future causals in a holdout sample is a kind of cheating). So I'd say that you likely can't say a lot about this without looking at the data themselves. They are available on Kaggle. Your best bet is likely to take these 518 series, hold out the last 24 months, fit ARIMA models, calculate MASEs, dig out the ten or twenty MASE-worst forecast series, get a big pot of coffee, look at these series and try to figure out what it is that makes ARIMA models so bad at forecasting them. EDIT: another point that appears obvious after the fact but took me five days to see - remember that the denominator of the MASE is the one-step ahead in-sample random walk forecast, whereas the numerator is the average of the 1-24-step ahead forecasts. It's not too surprising that forecasts deteriorate with increasing horizons, so this may be another reason for a MASE of 1.38. Note that the Seasonal Naive forecast was also included in the benchmark and had an even higher MASE.
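That in-sample/out-of-sample asymmetry in the definition is easy to get wrong in code, so here is a minimal sketch (the function name is mine; `m=1` gives the non-seasonal naive scaling, `m=12` would give the seasonal-naive scaling for monthly data):

```python
import numpy as np

def mase(train, actual, forecast, m=1):
    """Mean absolute scaled error: out-of-sample MAE of `forecast`,
    scaled by the in-sample one-step naive (lag-m) forecast MAE on `train`."""
    train = np.asarray(train, dtype=float)
    scale = np.mean(np.abs(train[m:] - train[:-m]))  # in-sample naive MAE
    return np.mean(np.abs(np.asarray(actual) - np.asarray(forecast))) / scale
```

For example, a hold-out forecast whose absolute errors exactly match the in-sample naive errors scores 1: `mase([1, 2, 3, 4], [5], [4])` is 1.0, while a forecast twice as bad, `mase([1, 2, 3, 4], [6], [4])`, scores 2.0.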
8,038
Interpretation of mean absolute scaled error (MASE)
Not an answer, but a plot following Stephan Kolassa's call to "look at these series". Kaggle tourism1 has 518 yearly time series, for which we want to predict the last 4 values: The plot shows the errors from the "naive" constant predictor, here $5^{th}$ last: $\qquad \mathrm{Error4}(y) \equiv {1 \over 4} \sum_{\text{last } 4} |y_i - y_{-5}|$ The numbers in the corners, 81 12 ..., are $\mathrm{Error4}(y)$ as % of range, and $\mathrm{length}(y)$. The 3 rows are the 10 worst, 10 in the middle, and 10 best of all 518 yearly time series. Obviously, very short series -- 12 11 7 7 7 ... in the top row -- are hard to predict: no surprise. (Athanasopoulos, Hyndman, Song and Wu, The Tourism Forecasting Competition (2011, 23p) used 112 of the 518 yearly series, but I don't see which ones.) Are there other, newer collections of time series since 2010, that might be worth looking at ?
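For concreteness, the naive constant predictor used for the plot can be written as a few lines of stdlib-only Python (function name mine):

```python
def error4(y):
    """Mean absolute error of predicting the last 4 values of y
    by the constant y[-5] (the 5th-last observation)."""
    const = y[-5]
    return sum(abs(v - const) for v in y[-4:]) / 4.0
```

For example, `error4([1, 2, 3, 4, 5])` predicts the constant 1 for the last four values 2, 3, 4, 5 and returns 2.5; dividing by `max(y) - min(y)` would give the "% of range" figures shown in the corners.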
8,039
(Why) Has Kohonen-style SOM fallen out of favor?
I think you are on to something by noting the influence of what the machine learning community currently touts as the 'best' algorithms for dimensionality reduction. While t-SNE has shown its efficacy in competitions, such as the Merck Viz Challenge, I personally have had success implementing SOM for both feature extraction and binary classification. While there are certainly some who dismiss SOMs without justification besides the algorithm's age (check out this discussion), there are also a number of articles that have been published within the last few years that implemented SOMs and achieved positive results (see Mortazavi et al., 2013; Frenkel et al., 2013 for instance). A Google Scholar search will reveal that SOMs are still utilized within a number of application domains. As a general rule, however, the best algorithm for a particular task is exactly that - the best algorithm for a particular task. Where a random forest may have worked well for a particular binary classification task, it may perform horribly on another. The same applies to clustering, regression, and optimization tasks. This phenomenon is tied to the No Free Lunch Theorem, but that is a topic for another discussion. In sum, if SOM works best for you on a particular task, that is the algorithm you should use for that task, regardless of what's popular.
8,040
(Why) Has Kohonen-style SOM fallen out of favor?
I have done research on comparing SOMs with t-SNE and more and also proposed an improvement on SOM that takes it to a new level of efficiency. Please check it out here and let me know your feedback. Would love to get some idea on what people think about it and if it is worth publishing in python for people to use. IEEE link to paper: http://ieeexplore.ieee.org/document/6178802/ Matlab implementation. https://www.mathworks.com/matlabcentral/fileexchange/35538-cluster-reinforcement--cr--phase Thanks for your feedback.
8,041
(Why) Has Kohonen-style SOM fallen out of favor?
My subjective view is that SOMs are less well known and perceived as being less 'sexy' than many other methods, but are still highly relevant for certain classes of problems. It may well be the case that they would have a significant contribution to make if they were more widely used. They are invaluable in the early stages of exploratory data science for getting a feel for the 'landscape' or 'topology' of multivariate data. The development of libraries such as Somoclu, and research such as that by Guénaël Cabanes (among many others) shows that SOMs are still relevant.
8,042
How to build the final model and tune probability threshold after nested cross-validation?
Nested cross validation explained without nesting Here's how I see (nested) cross validation and model building. Note that I'm a chemist and, like you, look from the application side to the model building process (see below). My main point here is that from my point of view I don't need a dedicated nested variety of cross validation. I need a validation method (e.g. cross validation) and a model training function: model = f (training data) "my" model training function f does not need any hyperparameters because it internally does all hyperparameter tuning (e.g. your alpha, lambda and threshold). In other words, my training function may contain any number of inner cross validations (or out-of-bag or whatever performance estimate I may deem useful). However, note that the distinction between parameters and hyper-parameters typically is that the hyperparameters need to be tuned to the data set/application at hand whereas the parameters can then be fitted regardless of what data it is. Thus from the point of view of the developer of a new classification algorithm, it does make sense to provide only the "naked" fitting function (g (training data, hyperparameters)) that fits the parameters if given data and hyperparameters. The point of having the "outer" training function f is that after you did your cross validation run, it gives you a straightforward way to train "on the whole data set": just use f (whole data set) instead of the call f (cv split training data) for the cross validation surrogate models. Thus in your example, you'll have 5+1 calls to f, and each of the calls to f will have e.g. 100 * 5 calls to g. probability threshold While you could do this with yet another cross validation, this is not necessary: it is just one more hyperparameter your ready-to-use model has and can be estimated inside f. What you need to fix it is a heuristic that allows you to calculate such a threshold. 
There's a wide variety of heuristics (from ROC plus specifying how important it is to avoid false positives compared to false negatives, over minimum acceptable sensitivity or specificity or PPV or NPV, to allowing two thresholds and thus an "uncertain" (NA) level, and so on) that are suitable in different situations - good heuristics are usually very application specific. But for the question here, you can do this inside f, e.g. using the predictions obtained during the inner cross validation to calculate the ROC and then finding your working point/threshold accordingly. Specific Comments to parts of the question I understand that I shouldn't report the performance from the CV used to pick the optimal hyperparameters as an estimate of the expected performance of my final model (which would be overly-optimistic) but should instead include an outer CV loop to get this estimate. Yes. (Though the inner estimate does carry information in relation to the outer estimate: if it is much more optimistic than the outer estimate, you are typically overfitting.) I understand that the inner CV loop is used for model selection Any kind of data-driven model tuning, really -> that includes tuning your cutoff threshold. (in this case, the optimal hyperparameters) and that the outer loop is used for model evaluation, i.e., the inner and outer CV serve two different purposes that often are erroneously conflated. Yes. That is, the hyperparameter tuning is part of "the method for building the model". I prefer to see it this way as well: I'm a chemist and, like you, look from the application side: for me a trained/fitted model is not complete without the hyperparameters, or more precisely, a model is something I can use directly to obtain predictions. Though as you note other people have a different view (without hyperparameter tuning). In my experience, this is often the case with people developing new models: hyperparameter tuning is then a "solved problem" and not considered. 
(side note: their view on what cross validation can do in terms of validation is also slightly different from what cross validation can do from the application side).
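A minimal sketch of this f/g split in plain Python (a hypothetical toy learner stands in for any real one; `g` fits a single threshold parameter, `f` tunes the hyperparameter with an inner k-fold CV and then refits on all of its input):

```python
import random

def g(train, hyperparam):
    """'Naked' fitting function: fits the parameter (a decision
    threshold) given data and a fixed hyperparameter."""
    xs = [x for x, _ in train]
    return hyperparam * sum(xs) / len(xs)

def accuracy(model, data):
    """Fraction of points whose label matches 'x above threshold'."""
    return sum((x > model) == label for x, label in data) / len(data)

def f(train, grid=(0.5, 1.0, 2.0), k=5):
    """Ready-to-use training function: does all hyperparameter tuning
    internally via an inner k-fold CV, then refits on all of `train`."""
    train = list(train)
    random.Random(0).shuffle(train)
    folds = [train[i::k] for i in range(k)]

    def inner_cv_score(h):
        scores = []
        for i in range(k):
            rest = [p for j in range(k) if j != i for p in folds[j]]
            scores.append(accuracy(g(rest, h), folds[i]))
        return sum(scores) / k

    best_h = max(grid, key=inner_cv_score)
    return g(train, best_h)        # final fit on the whole input

# the outer validation simply calls f on each outer training split;
# f(whole_data_set) then yields the deliverable model
data = [(x / 10, x / 10 > 1.0) for x in range(20)]
model = f(data)
```

The point of the sketch is the interface: the outer loop never sees the hyperparameter grid, because tuning lives inside f, exactly as described above.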
8,043
How to build the final model and tune probability threshold after nested cross-validation?
So, first of all, this is an answer based on the one by @cbeleites above, this one here, and the question itself (all these contributions helped me understand). There is nothing original in it, and although it makes sense to me, I am still a student in this topic, so I am not 100% sure of it. Therefore, any feedback is appreciated. However, it gives a specific example, so I think it might be useful for understanding the concepts expressed above. We as model builders need to deliver a model and an estimate of its performance. Similarly, if we buy an amperometer, it is sold together with a manual containing its specifications. I assume that an SVM with C parameter in the range [0.1, 10] is a good model and that accuracy is a good performance measure for the case at hand. We want to select the best C parameter among 0.1, 1, 10. These are the steps and their interpretation: 1- We first implement nested cross-validation on all data (see here for an example using sklearn). The aim of this step is to estimate the performance/accuracy of my final model (which I haven't fitted yet), or rather of my fitting procedure. Call it $a$. $a$ will be an average over outer folds of the accuracy from different but equivalent models (i.e. models having different parameters and hyperparameter C in {0.1, 1, 10}). 1b- Before proceeding further I should check model stability and that I am not overfitting, as explained here. 2- I now implement cross-validation on all data to determine the best C. I assume I get C = $c$. 3- I finally use all data to fit my SVM with C = $c$. This is the best model which I can achieve, so I deliver it. I will also tell my customer that it will have an accuracy of approximately $a$, which I have estimated in step 1.
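The three steps might look as follows with scikit-learn (a sketch on synthetic data; the linear-kernel SVM and the dataset are placeholders, not from the original post):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
param_grid = {"C": [0.1, 1, 10]}

# Step 1: nested CV.  The outer loop scores the whole tuning
# procedure, not any single fitted model; its mean is the estimate a.
inner = GridSearchCV(SVC(kernel="linear"), param_grid, cv=3)
a = cross_val_score(inner, X, y, cv=5).mean()

# Step 2: plain CV on all data to pick the best C (call it c).
search = GridSearchCV(SVC(kernel="linear"), param_grid, cv=3).fit(X, y)
c = search.best_params_["C"]

# Step 3: fit the deliverable model on all data with C = c, and
# report the accuracy a estimated in step 1 alongside it.
final_model = SVC(kernel="linear", C=c).fit(X, y)
```

Note that `a` describes the procedure in steps 2-3, not `final_model` itself, which is exactly the distinction the nested CV exists to preserve.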
8,044
How to build the final model and tune probability threshold after nested cross-validation?
I think your following understanding is reasonable: Now, the links I've posted suggest that "the way to think of cross-validation is as estimating the performance obtained using a method for building a model, rather than for estimating the performance of a model". Given that, how should I interpret the results of the nested CV procedure? The nested CV validates the model building process. (It doesn't validate the final model.) Not only the nested CV, but the plain CV without a nested loop also does the same thing. The plain CV validates the model building process but doesn't give out the final model for prediction. We need to construct the final model using the whole data after the plain CV. If we consider the nested CV on an equal footing with the plain CV, we need to refit the final model with the whole data after the nested cross-validation process. But the hyperparameter part complicates the matter. We had better not do both the hyperparameter tuning and the model parameter optimization on the same data, to avoid selection bias. What I suggest is that we divide the whole training data into two parts for the final model fit: a final-training sample and a final-validation sample. We fit the model on the final-training sample and select the hyperparameter with the final-validation sample. We consider the selected model as our final model. And we test our final model on the test set and get the prediction we want.
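The suggested final split can be sketched with scikit-learn (synthetic data; the SVM and the C grid are hypothetical placeholders for whatever model and hyperparameters are at hand):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=1)

# hold out the test set first; it stays untouched until the very end
X_trval, X_test, y_trval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1)

# split the rest into final-training and final-validation samples
X_tr, X_val, y_tr, y_val = train_test_split(
    X_trval, y_trval, test_size=0.25, random_state=1)

# fit on final-training, select the hyperparameter on final-validation
best_C = max((0.1, 1, 10),
             key=lambda C: SVC(kernel="linear", C=C)
             .fit(X_tr, y_tr).score(X_val, y_val))

# the selected model is the final model; test it once on the test set
final_model = SVC(kernel="linear", C=best_C).fit(X_tr, y_tr)
test_accuracy = final_model.score(X_test, y_test)
```

The key property is that the hyperparameter is never chosen on the same data the parameters were fitted on, and the test set is used exactly once.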
8,045
Should final (production ready) model be trained on complete data or just on training set?
You will almost always get a better model after refitting on the whole sample. But as others have said you have no validation. This is a fundamental flaw in the data splitting approach. Not only is data splitting a lost opportunity to directly model sample differences in an overall model, but it is unstable unless your whole sample is perhaps larger than 15,000 subjects. This is why 100 repeats of 10-fold cross-validation is necessary (depending on the sample size) to achieve precision and stability, and why the bootstrap for strong internal validation is even better. The bootstrap also exposes how difficult and arbitrary is the task of feature selection. I have described the problems with 'external' validation in more detail in BBR Chapter 10.
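The repeated cross-validation mentioned above is a one-liner in scikit-learn (a sketch; the logistic regression learner and the synthetic data are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = make_classification(n_samples=200, random_state=0)

# 100 repeats of 10-fold CV: 1000 surrogate models, whose spread
# exposes the instability that a single train/test split would hide
cv = RepeatedKFold(n_splits=10, n_repeats=100, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(scores.mean(), scores.std())
```

The standard deviation across the 1000 surrogate scores is the quantity of interest here: with small samples it is often large enough to make any single split's estimate untrustworthy.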
8,046
Should final (production ready) model be trained on complete data or just on training set?
Unless you're limiting yourself to a simple class of convex models/loss functions, you're considerably better off keeping a final test split. Here's why: Let's say you collect iid sample pairs from your data generating distribution, some set of (x, y). You then split this up into a training and test set, and train a model on the training set. Out of that training process you get a model instance, f(x; w), where w denotes the model parameters. Let's say you have N observations in the test set. When you validate this model on that test set you form the set of test predictions, {f(x_i, w) : i=1,2,...,N} and compare it to the set of test labels {y_i : i=1,2,...,N} using a performance metric. What you're able to say using N independent observations is how you expect that model instance, i.e. the function given a specific w, will generalize to other iid data from the same distribution. Importantly, you only really have one observation (that w you found) to comment on your process for determining f(x, w), i.e. the training process. You can say a little more using something like k-fold cross validation, but unless you're willing to do exhaustive cross-validation (which is not really feasible in a computer vision or NLP context), you'll always have less data on the reliability of your training process. Take a pathological example, where you draw the model parameters at random, and you don't train them at all. You obtain some model instance f(x, w_a). Despite the absurdity of your (lack of) training process, your test set performance is still indicative of how that model instance will generalize to unseen data. Those N observations are still perfectly valid to use. Maybe you'll have gotten lucky and have landed on a pretty good w_a. However, if you combine the test and training set, then "retrain" the model to obtain a w_b, you're in trouble. 
The results of your previous test performance amount to basically a point estimate of how well your next random parameter draw will fare. There are statistical results that you can use to comment on the reliability of the entire training process. But they require some assumptions about your model class, loss function, and your ability to find the best f(x, w) from within that class for any given set of training observations. With all that, you can get some bounds on the probability that your performance on unseen data will deviate by more than a certain amount from what you measured on the training data. However, those results do not carry over (in a useful way) to overparameterized and non-convex models like modern neural networks. The pathological example above is a little over the top. But as an ML researcher and consultant, I have seen neural network training pipelines that occasionally latch on to terrible local minima, but otherwise perform great. Without a final test split, you'd have no way of being sure that hadn't happened on your final retraining. More generally, in a modern machine learning context, you cannot treat the models coming out of your training process as interchangeable, even if they do perform similarly on a validation set. In fact, you may see considerable variation from one model to the next when using the full bag of stochastic optimization tricks. (For more details on that, check out this work on underspecification.)
8,047
Should final (production ready) model be trained on complete data or just on training set?
You don't need to re-train again. When you report your results, you always report test-data results because they give a much better understanding. With a test data set we can more accurately see how well a model is likely to perform on out-of-sample data.
8,048
lme and lmer comparison
UPDATE JUNE 2016: Please see Ben's blog entry describing his current thoughts on accomplishing this in lme4: Braindump 01 June 2016 If you prefer Bayesian methods, the brms package's brm supports some correlation structures: CRAN brms page. (Note especially: "As of brms version 0.6.0, the AR structure refers to autoregressive effects of residuals to match the naming and implementation in other packages such as nlme. Previously, the AR term in brms referred to autoregressive effects of the response. The latter are now named ARR effects and can be modeled using argument r in the cor_arma and cor_arr functions.") ORIGINAL ANSWER JULY 2013: (Converted from a comment.) I would say lmer would be pretty good with a random effect of year and a random effect of customer (let's say you only have one measurement per customer per year); lmer(y~1 + (1|year) + (1|customer), ...) would fit the (intercept-only) model $$ Y_{ij} \sim \text{Normal}(a + \epsilon_{\text{year},i} + \epsilon_{\text{customer},j}, \sigma^2_0) $$ where $\epsilon_{\text{year}}$ and $\epsilon_{\text{customer}}$ are zero-mean Normal variates with their own specific variances. This is a pretty boring model, you might want to add an overall (fixed-effect) trend of time and also consider a random time-by-customer interaction (i.e. random slopes). I think lmer(y~year + (1|year) + (year|customer), ...) should fit the model $$ Y_{ij} \sim \text{Normal}((a + \epsilon_{\text{customer},j}) + (b + \epsilon_{\text{year} \times \text{customer},j}) \cdot \text{year} + \epsilon_{\text{year},i}, \sigma^2_0) $$ (using year in this way is an exception to the usual rule of not including an input variable as both a fitted and a random effect in the same model; provided it's a numeric variable, year gets treated as continuous in the fixed effect and the year:customer (random) interaction and as categorical in the random effect ...) 
Of course you might want to add year-level, customer-level, and observation-level covariates which would soak up some of the relevant variance (e.g. add average consumer price index to explain why years were bad or good ...) Ideally you would also want to allow for temporal autocorrelation within each customer's time series, which is at the moment not possible with lmer, but you could check the temporal autocorrelation function to see if that was important ... Caveat: I don't know that much about standard approaches for handling panel data; this is based just on my knowledge of mixed models. Commenters (or editors) should feel free to chime in if this seems to violate standard/best practices in econometrics.
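The intercept-only model above can be made concrete with a small simulation. This is a hedged Python sketch (the original uses R/lmer; names like sd_year are illustrative) of $Y_{ij} = a + \epsilon_{\text{year},i} + \epsilon_{\text{customer},j} + \text{noise}$, whose total variance should be roughly the sum of the three variance components:

```python
import numpy as np

# Simulate Y_ij = a + eps_year_i + eps_customer_j + residual noise,
# one observation per (year, customer) cell.
rng = np.random.default_rng(0)
n_years, n_cust = 10, 200
a, sd_year, sd_cust, sd_resid = 5.0, 1.0, 2.0, 0.5

eps_year = rng.normal(0.0, sd_year, n_years)
eps_cust = rng.normal(0.0, sd_cust, n_cust)
noise = rng.normal(0.0, sd_resid, (n_years, n_cust))

y = a + eps_year[:, None] + eps_cust[None, :] + noise

# Total variance is roughly the sum of the three components:
# 1.0 + 4.0 + 0.25 = 5.25 in expectation.
print(y.shape, y.mean(), y.var())
```

A mixed-model fit of such data should recover variance components near the ones used to simulate it, up to sampling noise (the year component is estimated from only 10 draws here, so expect it to be noisy).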
8,049
lme and lmer comparison
To answer your questions directly, and NB this is years after the original post! Yep, there are still correlation structures that nlme handles which lme4 will not handle. However, for as long as nlme allows the user to define general corstrs and lme4 does not, this will be the case. This has surprisingly little practical impact. The "big three" correlation structures (independent, exchangeable, and AR-1) are easily handled by both packages. It's certainly possible. You can fit panel data with the lm function too! My recommendation about which to use depends on the problem. lme4 is a much smaller tool kit, and the formula representation is a neat, concise way of depicting some very common mixed effects models. nlme is the very large tool box, including a TIG welder to make any tools you need. You say you want to allow for "variation over time". Essentially, an exchangeable correlation structure achieves this, allowing for a random intercept in each cluster, so that the intracluster variance is the sum of cluster-level variation as well as (what you call) variation over time. And this by no means deters you from using fixed effects to obtain more precise predictions over time.
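For reference, the "big three" correlation structures are easy to write down explicitly. A small Python sketch (not from the original answer; the independent structure is just the identity matrix):

```python
import numpy as np

def exchangeable(n, rho):
    """Exchangeable (compound symmetry): every pair of observations in a
    cluster has the same correlation rho."""
    return rho * np.ones((n, n)) + (1.0 - rho) * np.eye(n)

def ar1(n, rho):
    """AR-1: correlation decays geometrically with lag, rho ** |i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

print(exchangeable(4, 0.3))
print(ar1(4, 0.5))  # e.g. entry [0, 2] is 0.5**2 = 0.25
```

These are the working correlation matrices that nlme's corCompSymm and corAR1 parameterize; the exchangeable one is exactly what a random intercept per cluster induces.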
8,050
Difference between Hidden Markov models and Particle Filter (and Kalman Filter)
It will be helpful to distinguish the model from the inference you want to make with it, because standard terminology now mixes the two. The model is the part where you specify the nature of: the hidden space (discrete or continuous), the hidden state dynamics (linear or non-linear), the nature of the observations (typically conditionally multinomial or Normal), and the measurement model connecting the hidden state to the observations. HMMs and state space models are two such sets of model specifications. For any such model there are three standard tasks: filtering, smoothing, and prediction. Any time series text (or indeed google) should give you an idea of what they are. Your question is about filtering, which is a way to get a) a posterior distribution over (or 'best' estimate of, for some sense of best, if you're not feeling Bayesian) the hidden state at $t$ given the complete set of data up to and including time $t$, and relatedly b) the probability of the data under the model. In situations where the state is continuous, the state dynamics and measurement are linear, and all noise is Normal, a Kalman Filter will do that job efficiently. Its analogue when the state is discrete is the Forward Algorithm. In the case where there is non-Normality and/or non-linearity, we fall back to approximate filters. There are deterministic approximations, e.g. the Extended or Unscented Kalman Filter, and there are stochastic approximations, the best known of which is the Particle Filter. The general feeling seems to be that in the presence of unavoidable non-linearity in the state or measurement parts or non-Normality in the observations (the common problem situations), one tries to get away with the cheapest approximation possible. So, EKF, then UKF, then PF. The literature on the Unscented Kalman Filter usually has some comparisons of situations when it might work better than the traditional linearization of the Extended Kalman Filter. 
The Particle Filter has almost complete generality - any non-linearity, any distributions - but in my experience it has required quite careful tuning and is generally much more unwieldy than the others. In many situations, however, it's the only option. As for further reading: I like ch. 4-7 of Särkkä's Bayesian Filtering and Smoothing, though it's quite terse. The author has an online copy available for personal use. Otherwise, most state space time series books will cover this material. For Particle Filtering, there's a Doucet et al. volume on the topic, but I guess it's quite old now. Perhaps others will point out a newer reference.
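As an illustration of the linear-Gaussian special case, here is a minimal Kalman filter for the scalar local-level model (a Python sketch; the function name and parameters are illustrative, not from any particular library):

```python
def kalman_filter_1d(ys, q, r, m0=0.0, p0=1.0):
    """Kalman filter for the local-level model
         x_t = x_{t-1} + w_t,  w_t ~ N(0, q)   (state dynamics)
         y_t = x_t     + v_t,  v_t ~ N(0, r)   (measurement)
    Returns the filtered means E[x_t | y_1..y_t]."""
    m, p = m0, p0
    means = []
    for y in ys:
        p = p + q                    # predict: state variance grows by q
        k = p / (p + r)              # Kalman gain
        m = m + k * (y - m)          # update with the innovation y - m
        p = (1.0 - k) * p            # posterior variance shrinks
        means.append(m)
    return means

print(kalman_filter_1d([1.0, 1.2, 0.9, 1.1], q=0.01, r=0.1))
```

The gain k interpolates between trusting the prediction (r large) and trusting the observation (r small); the Forward Algorithm plays the same role when the hidden state is discrete.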
8,051
Why do smaller weights result in simpler models in regularization?
If you use regularization you're not only minimizing the in-sample error but $OutOfSampleError \le InSampleError + ModelComplexityPenalty$. More precisely, $J_{aug}(h(x),y,\lambda,\Omega)=J(h(x),y)+\frac{\lambda}{2m}\Omega$ for a hypothesis $h \in H$, where $\lambda$ is some parameter, usually $\lambda \in (0,1)$, $m$ is the number of examples in your dataset, and $\Omega$ is some penalty that depends on the weights $w$, e.g. $\Omega=w^Tw$. This is known as the augmented error. Now, the penalty term is small only if the weights are rather small. Here is some R code to toy with:
w <- c(0.1, 0.2, 0.3)  # a small weight vector
out <- t(w) %*% w      # the penalty Omega = w'w
print(out)             # 0.14
So, instead of penalizing the whole hypothesis space $H$, we penalize each hypothesis $h$ individually. We sometimes refer to the hypothesis $h$ by its weight vector $w$. As for why small weights go along with low model complexity, let's look at the following hypothesis: $h_1(x)=x_1 \times w_1 + x_2 \times w_2 + x_3 \times w_3$. In total we have three active weight parameters $w_1,\dotsc,w_3$. Now, let's shrink $w_3$ all the way down to $w_3=0$. This reduces the model's complexity to: $h_1(x)=x_1 \times w_1 + x_2 \times w_2$. Instead of three active weight parameters, only two remain.
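The augmented error above can also be computed directly. A small Python sketch (names are illustrative; $J$ is taken to be mean squared error, one common choice) showing that larger weights pay a larger penalty for the same data fit:

```python
import numpy as np

def augmented_error(w, X, y, lam):
    """J_aug = J(h) + (lam / 2m) * w'w for a linear hypothesis h(x) = x . w,
    taking J to be mean squared error (one common choice)."""
    m = len(y)
    j = np.mean((X @ w - y) ** 2)          # in-sample error J
    penalty = (lam / (2.0 * m)) * float(w @ w)  # (lam / 2m) * Omega
    return j + penalty

X, y = np.eye(2), np.zeros(2)
print(augmented_error(np.array([0.1, 0.1]), X, y, lam=1.0))  # 0.01 + 0.005 = 0.015
print(augmented_error(np.array([3.0, 3.0]), X, y, lam=1.0))  # 9.0 + 4.5 = 13.5
```

Minimizing this sum therefore trades off fit against weight magnitude, which is exactly why the minimizer tends to have small weights.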
8,052
Why do smaller weights result in simpler models in regularization?
I'm not sure if I really know what I'm talking about but I'll give it a shot. It isn't so much having small weights that prevents overfitting (I think); it is more the fact that regularizing more strongly reduces the model space. In fact you can regularize around 10000000 if you wanted to, by taking the L2 norm of your weight values minus a vector of 10000000s. This would also reduce overfitting (of course you should also have some rationale behind doing that; e.g. perhaps your Y values are 10000000 times bigger than the sum of your X values), but no one really does that because you can just rescale the data. Bias and variance are both a function of model complexity. This is related to VC theory, so look at that. The larger the space of possible models (i.e., basically, all the values your parameters can take), the more likely the model will overfit. If your model can do everything from being a straight line to wiggling in every direction like a sine wave that can also go up and down, it's much more likely to pick up and model random perturbations in your data that aren't a result of the underlying signal but the result of just lucky chance in that data set (this is why getting more data helps overfitting but not underfitting). When you regularize, you are basically reducing the model space. This doesn't necessarily mean smoother/flatter functions have higher bias and less variance. Think of a linear model that is overlaid with a sine wave restricted to really small-amplitude oscillations that basically do nothing (it's basically a fuzzy line). This function is super wiggly in a sense, but only overfits slightly more than a linear regression. The reason why smoother/flatter functions tend to have higher bias and less variance is that we as data scientists assume that if we have a reduced sample space we would much rather, by Occam's razor, keep the models that are smoother and simpler and throw out the models that are wiggly and oscillating all over the place. 
It makes sense to throw out wiggly models first, which is why smoother models tend to be more prone to underfitting and not overfitting. Regularization, like ridge regression, reduces the model space because it makes it more expensive to be further away from zero (or any number). Thus when the model is faced with a choice of taking into account a small perturbation in your data, it will more likely err on the side of not doing so, because that would (generally) increase your parameter values. If that perturbation is due to random chance (i.e., one of your x variables just had a slight random correlation with your y variable), the model will not take that into account, unlike a non-regularized regression, because the non-regularized regression has no cost associated with increasing beta sizes. However, if that perturbation is due to real signal, your regularized regression will more likely miss it, which is why it has higher bias (and why there is a bias-variance tradeoff).
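The shrinkage behavior described here is easy to see with closed-form ridge regression, $\hat\beta(\lambda) = (X^TX + \lambda I)^{-1}X^Ty$: as $\lambda$ grows, the coefficient vector is pulled toward zero, i.e. the effective model space shrinks. A hedged Python sketch (simulated data, illustrative names):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=50)

def ridge(X, y, lam):
    """Closed-form ridge estimate (X'X + lam I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Heavier regularization pulls the coefficients toward zero:
for lam in (0.0, 10.0, 1000.0):
    print(lam, np.linalg.norm(ridge(X, y, lam)))
```

The norm of the coefficient vector is monotonically non-increasing in $\lambda$, which is the precise sense in which "small perturbations get ignored" as the penalty grows.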
8,053
Why do smaller weights result in simpler models in regularization?
Story: My grandma walks, but doesn't climb. Some grandmas do. One grandma was famous for climbing Kilimanjaro. That dormant volcano is big. It is 16,000 feet above its base. (Don't hate my imperial units.) It also has glaciers on the top, sometimes. If you climb in a year when there is no glacier, and you get to the top, is it the same top as if there were a glacier? The altitude is different. The path you have to take is different. What if you go to the top when the glacier thickness is larger? Does that make it more of an accomplishment? About 35,000 people attempt to climb it every year, but only about 16,000 succeed. Application: So I would explain the control of weights (aka minimizing model complexity) to my grandma as follows: Grandma, your brain is an amazing thinker whether or not you know it. If I ask you how many of the 16,000 who think they reached the top actually did so, you would say "all of them". If I put sensors in the shoes of all 35,000 climbers and measured height above sea level, then some of those folks didn't get as high as others, and might not qualify. When I do that I am applying a constant model - I am saying if height is not equal to some percentile of measured max heights then it is not the top. Some people jump at the top. Some people just cross the line and sit down. I could add latitude and longitude to the sensor, and fit some higher order equations, and maybe I could get a better fit, and have more folks in, maybe even exactly 45% of the total folks who attempt it. So let's say next year is a "big glacier" year or a "no glacier" year because some volcano really transforms the albedo of the earth. If I take my complex and exacting model from this year and apply it to the folks who climb next year, the model is going to have strange results. Maybe everyone will "pass" or even be too high to pass. 
Especially when the model is complex it will tend to not generalize well. It may exactly fit this year's "training" data, but when new data comes it behaves poorly. Discussion: When you limit the complexity of the model, then you can usually have better generalization without over-fitting. Using simpler models, ones that are more built to accommodate real-world variation, tends to give better results, all else being equal. Now you have a fixed network topology, so you are saying "my parameter count is fixed" - I can't have variation in model complexity. Nonsense. Measure the entropy in the weights. When the entropy is higher it means some coefficients carry substantially more "informativeness" than others. If you have very low entropy it means that in general the coefficients carry similar levels of "informativeness". Informativeness is not necessarily a good thing. In a democracy you want all people to be equal, and things like George Orwell "more equal than others" is a measure of failures of the system. If you don't have a great reason for it, you want weights to be pretty similar to each other. On a personal note: instead of using voodoo or heuristics, I prefer things like "information criteria" because they allow me to get reliable and consistent results. AIC, AICc, and BIC are some common and useful starting points. Repeating the analysis to determine stability of the solution, or range of information criteria results is a common approach. One might look at putting a ceiling on the entropy in the weights.
Why do smaller weights result in simpler models in regularization?
Story: My grandma walks, but doesn't climb. Some grandmas do. One grandma was famous for climbing Kilimanjaro. That dormant volcano is big. It is 16,000 feet above its base. (Don't hate my imperial
Why do smaller weights result in simpler models in regularization? Story: My grandma walks, but doesn't climb. Some grandmas do. One grandma was famous for climbing Kilimanjaro. That dormant volcano is big. It is 16,000 feet above its base. (Don't hate my imperial units.) It also has glaciers on the top, sometimes. If you climb on a year where there is no glacier, and you get to the top, is it the same top as if there was a glacier? The altitude is different. The path you have to take is different. What if you go to the top when the glacier thickness is larger? Does that make it more of an accomplishment? About 35,000 people attempt to climb it every year, but only about 16,000 succeed. Application: So I would explain the control of weights (aka minimizing model complexity) to my grandma, as follows: Grandma, your brain is an amazing thinker whether or not you know it. If I ask you how many of the 16,000 who think they reached the top actually did so, you would say "all of them". If I put sensors in shoes of all the 30,000 climbers, and measured height above sea-level, then some of those folks didn't get as high as others, and might not qualify. When I do that I am going to a constant model - I am saying if height is not equal to some percentile of measured max heights then it is not the top. Some people jump at the top. Some people just cross the line and sit down. I could add latitude and longitude to the sensor, and fit some higher order equations and maybe I could get a better fit, and have more folks in, maybe even exactly 45% of the total folks who attempt it. So let's say next year is a "big glacier" year or a "no glacier" year because some volcano really transforms the albedo of the earth. If I take my complex and exacting model from this year and apply it to the folks who climb next year the model is going to have strange results. Maybe everyone will "pass" or even be too high to pass. 
Maybe nobody at all will pass, and it will think nobody actually completed the climb. Especially when the model is complex it will tend to not generalize well. It may exactly fit this year's "training" data, but when new data comes it behaves poorly. Discussion: When you limit the complexity of the model, then you can usually have better generalization without over-fitting. Using simpler models, ones that are more built to accommodate real-world variation, tends to give better results, all else being equal. Now you have a fixed network topology, so you are saying "my parameter count is fixed" - I can't have variation in model complexity. Nonsense. Measure the entropy in the weights. When the entropy is higher it means some coefficients carry substantially more "informativeness" than others. If you have very low entropy it means that in general the coefficients carry similar levels of "informativeness". Informativeness is not necessarily a good thing. In a democracy you want all people to be equal, and things like George Orwell "more equal than others" is a measure of failures of the system. If you don't have a great reason for it, you want weights to be pretty similar to each other. On a personal note: instead of using voodoo or heuristics, I prefer things like "information criteria" because they allow me to get reliable and consistent results. AIC, AICc, and BIC are some common and useful starting points. Repeating the analysis to determine stability of the solution, or range of information criteria results is a common approach. One might look at putting a ceiling on the entropy in the weights.
Why do smaller weights result in simpler models in regularization?
A simple intuition is the following. Remember that for regularization the features should be standardized so that they have approximately the same scale. Say that the function to be minimized is only the sum of squared errors: $SSE$. Adding more features will likely reduce this $SSE$, especially if the features are selected from a noisy pool. A feature then reduces the $SSE$ by chance, leading to overfitting. Now consider regularization, LASSO in this case. The function to be minimized is then $SSE + \lambda \Sigma |\beta|$. Adding an extra feature now results in an extra penalty: the sum of absolute coefficients gets larger! The reduction in $SSE$ should outweigh the added penalty. It is no longer possible to add extra features without cost. The combination of feature standardization and penalizing the sum of the absolute coefficients restricts the search space, leading to less overfitting. Now LASSO, $SSE + \lambda \Sigma |\beta|$, tends to set coefficients to zero, while ridge regression, $SSE + \lambda \Sigma \beta^2$, tends to shrink coefficients proportionally. This can be seen as a side effect of the type of penalty function. The picture below helps with this: The penalty function in practice gives a 'budget' for the parameters, as pictured above by the cyan area. On the left, for LASSO, the $SSE$ contours are likely to hit the constraint region at an axis, setting one of the coefficients to zero and, depending on the budget, shrinking the other. On the right, the contours can hit the region off the axes, more or less spreading the budget over the parameters and leading to shrinkage of both. Picture taken from https://onlinecourses.science.psu.edu/stat857/node/158 Summarizing: regularization penalizes adding extra parameters, and depending on the type of regularization will shrink all coefficients (ridge), or will set a number of coefficients to 0 while maintaining the other coefficients as far as the budget allows (LASSO)
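The zeroing-vs-shrinking contrast also shows up in the one-dimensional shrinkage rules (a standard textbook result for an orthonormal design, sketched here, not something from the linked course): the lasso applies soft-thresholding, which maps small coefficients exactly to zero, while ridge shrinks every coefficient by the same multiplicative factor.

```python
def lasso_shrink(beta_ols, lam):
    """Soft-thresholding: the lasso solution for an orthonormal design."""
    if beta_ols > lam:
        return beta_ols - lam
    if beta_ols < -lam:
        return beta_ols + lam
    return 0.0  # coefficients inside [-lam, lam] hit exactly zero

def ridge_shrink(beta_ols, lam):
    """Ridge solution for an orthonormal design: proportional shrinkage."""
    return beta_ols / (1.0 + lam)

for b in [3.0, 0.4, -0.2]:
    print(b, lasso_shrink(b, 0.5), ridge_shrink(b, 0.5))
# large coefficients survive the lasso (shrunk by lam);
# small ones are set to 0, whereas ridge only scales them down.
```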
Why do smaller weights result in simpler models in regularization?
By adding Gaussian noise to the input, the learning model will behave like an L2-penalty regularizer. To see why, consider a linear regression where i.i.d. noise is added to the features. The expected loss is then a function of the errors plus a contribution from the squared norm of the weights. See the derivation: https://www.youtube.com/watch?v=qw4vtBYhLp0
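A quick numerical check of this claim (a toy sketch with made-up weights and data, not taken from the linked derivation): for a linear model, the average of the squared error under input noise with variance $\sigma^2$ should approach the clean squared error plus $\sigma^2 \|w\|^2$, i.e. an L2 penalty on the weights.

```python
import random

random.seed(0)

# Arbitrary fixed weights and one data point, for illustration only.
w = [0.8, -1.5, 2.0]
x = [1.0, 0.5, -0.3]
y = 1.2
sigma = 0.1

def loss(weights, features, target):
    pred = sum(wi * xi for wi, xi in zip(weights, features))
    return (target - pred) ** 2

# Average squared error when i.i.d. Gaussian noise is added to the inputs.
trials = 100_000
noisy = 0.0
for _ in range(trials):
    x_noisy = [xi + random.gauss(0.0, sigma) for xi in x]
    noisy += loss(w, x_noisy, y)
noisy /= trials

# Theory: E[noisy loss] = clean loss + sigma^2 * ||w||^2.
clean = loss(w, x, y)
penalty = sigma ** 2 * sum(wi ** 2 for wi in w)
print(noisy, clean + penalty)  # the two numbers should nearly agree
```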
How and why does Batch Normalization use moving averages to track the accuracy of the model as it trains?
When using batch normalization, the first thing we have to understand is that it works in two different ways during training and testing. During training we calculate the mini-batch mean in order to normalize the batch. At inference we just apply the pre-calculated mini-batch statistics. The second point, then, is how to calculate these mini-batch statistics. Here comes the moving average:
running_mean = momentum * running_mean + (1 - momentum) * sample_mean
running_var = momentum * running_var + (1 - momentum) * sample_var
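The update rule above can be sketched end to end (a toy illustration, not tied to any particular framework): during "training" the running statistics are updated from each mini-batch, and at "inference" those frozen values normalize a new input.

```python
import random

random.seed(1)
momentum = 0.9
running_mean, running_var = 0.0, 1.0

# Simulated training: mini-batches of a feature whose true
# distribution has mean 5 and variance 4.
for _ in range(500):
    batch = [random.gauss(5.0, 2.0) for _ in range(64)]
    sample_mean = sum(batch) / len(batch)
    sample_var = sum((v - sample_mean) ** 2 for v in batch) / len(batch)
    running_mean = momentum * running_mean + (1 - momentum) * sample_mean
    running_var = momentum * running_var + (1 - momentum) * sample_var

# At inference we normalize with the frozen running statistics,
# not with statistics of the test batch.
x = 7.0
x_hat = (x - running_mean) / (running_var ** 0.5)
print(running_mean, running_var, x_hat)  # ~5, ~4, ~1
```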
How and why does Batch Normalization use moving averages to track the accuracy of the model as it trains?
They are talking about batch normalization, which they have described for the training procedure but not for inference. This is a process of normalizing the hidden units using sample means etc. In this section they explain what to do at the inference stage, when you are just making predictions (i.e. after training has completed). However, when you use a validation set during training (e.g. for early stopping), you interleave prediction on the validation set with training in order to estimate your validation error. During this process you don't have a population average (the averages are still changing as you train), so you use a running average to compute the batch-norm parameters when evaluating performance on the validation set. It is in this sense that "Using moving averages instead, we track the accuracy of the model as it trains." is meant - it has nothing to do with literally using the running means as a metric for neural network performance.
How and why does Batch Normalization use moving averages to track the accuracy of the model as it trains?
In the paper you referenced, the suggested test-time behavior is to compute the sample mean and variance for each feature using a large number of training images, rather than using a running average. This block of code:
running_mean = momentum * running_mean + (1 - momentum) * sample_mean
running_var = momentum * running_var + (1 - momentum) * sample_var
represents an alternative approach for test time that doesn't require the extra estimation step needed in the paper. For this moving-average alternative we just update the mean and variance with an exponential-decay model controlled by the momentum parameter.
Machine learning techniques for parsing strings?
This can be seen as a sequence labeling problem, in which you have a sequence of tokens and want to give a classification for each one. You can use hidden Markov models (HMM) or conditional random fields (CRF) to solve the problem. There are good implementations of HMM and CRF in an open-source package called Mallet. In your example, you should convert the input to the format below (one token and its label per line). Moreover, you should generate extra features.
1600 STREET
Pennsylvania STREET
Ave STREET
, OUT
Washington CITY
, OUT
DC PROVINCE
20500 POSTCODE
USA COUNTRY
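Generating that training file from labeled examples is just string formatting. A minimal sketch (the function name and the tag set here are illustrative, not a Mallet API):

```python
def to_token_label_lines(tokens_with_labels):
    """Render (token, label) pairs one per line - the shape of a
    sequence-tagging training file like the one shown above."""
    return "\n".join(f"{tok} {lab}" for tok, lab in tokens_with_labels)

example = [
    ("1600", "STREET"), ("Pennsylvania", "STREET"), ("Ave", "STREET"),
    (",", "OUT"), ("Washington", "CITY"), (",", "OUT"),
    ("DC", "PROVINCE"), ("20500", "POSTCODE"), ("USA", "COUNTRY"),
]
print(to_token_label_lines(example))
```

In practice each line would also carry the extra features (capitalization, digit patterns, neighboring tokens) appended between the token and the label.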
Machine learning techniques for parsing strings?
This sounds like a problem to be solved with bidirectional LSTM classification. You tag each character of the sample with one category, for example:
street: 1
city: 2
province: 3
postcode: 4
country: 5
1600 Pennsylvania Ave, Washington, DC 20500 USA
111111111111111111111, 2222222222, 33 44444 555
Now, train your classifier based on these labels. Boom!
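Building those per-character label strings from span annotations can be sketched like this (a hypothetical helper for preparing training data, not part of any LSTM library; separator characters are kept literally so the alignment is easy to eyeball):

```python
def char_labels(segments):
    """Build an input string and an aligned per-character label string
    from (text, label) segments; empty label means separator."""
    text, labels = "", ""
    for piece, label in segments:
        text += piece
        labels += label * len(piece) if label else piece
    return text, labels

segments = [
    ("1600 Pennsylvania Ave", "1"), (", ", ""), ("Washington", "2"),
    (", ", ""), ("DC", "3"), (" ", ""), ("20500", "4"),
    (" ", ""), ("USA", "5"),
]
text, labels = char_labels(segments)
print(text)
print(labels)
```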
Machine learning techniques for parsing strings?
I had to solve a very similar problem: validating whether an address is valid or invalid. Typically addresses have the structure "1600 Pennsylvania Ave, Washington DC, 20500". A string such as "I went down 2000 steps and reached Pennsylvania Ave in Washington DC." is not a valid address. This can be solved by classification techniques such as SVMs, neural networks, etc. The idea is to identify a key set of features. Some of these could be: 1) Does the street address start with a valid block number? Most US block numbers are plain numbers (e.g. 1200), a number followed by a single letter (120A), or a number preceded by a single letter (e.g. S200). 2) If the address is well formatted, the street names end in suffixes like Ave for Avenue, Dr for Drive, Blvd for Boulevard. It is possible to obtain the US street suffix list from the USPS site. 3) The number of words in the street address field can also be an interesting feature. If there are too many words, it is probably not a valid address; see the example above. 4) How many words occur between the block number and the street suffix in the address field? These can be used to train a learning algorithm, and the resulting model can be used to validate whether a given address is valid or not.
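The four features above can be extracted with a few regular expressions. A rough sketch (the suffix list is a small placeholder for the USPS table, and the function and names are hypothetical):

```python
import re

# Placeholder street-suffix list; in practice use the full USPS table.
SUFFIXES = {"ave", "dr", "blvd", "st", "rd", "ln"}

def address_features(text):
    """Extract the four features sketched above for a candidate address."""
    words = text.replace(",", " ").split()
    # 1) valid-looking block number: 1200, 120A, or S200
    block_ok = bool(words) and bool(
        re.fullmatch(r"\d+|\d+[A-Za-z]|[A-Za-z]\d+", words[0]))
    # 2) does any word look like a street suffix?
    suffix_idx = next((i for i, w in enumerate(words)
                       if w.lower().rstrip(".") in SUFFIXES), None)
    # 3) total word count
    n_words = len(words)
    # 4) words between the block number and the suffix
    gap = suffix_idx - 1 if block_ok and suffix_idx is not None else None
    return {"block_ok": block_ok, "has_suffix": suffix_idx is not None,
            "n_words": n_words, "gap": gap}

print(address_features("1600 Pennsylvania Ave, Washington DC, 20500"))
print(address_features("I went down 2000 steps and reached Pennsylvania Ave"))
```

These feature dictionaries would then be fed to whatever classifier (SVM, neural network, ...) you train.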
Machine learning techniques for parsing strings?
This is a bit of a hack that does not require your own solution: reverse geocoding. This can either give you cleaner data or actually do all the work for you. For example, here's some Stata code with geocode3 from SSC, which uses Google. I guess this is similar to Fuzzy Gazetteer. The first address is pretty messy, the second is clean, and the third is foreign. Other software can handle this as well.
clear
set obs 3
gen address = ""
replace address = "Big Foot Museum in Felton CA" in 1
replace address = "1600 Pennsylvania Ave, Washington, DC 20500 USA" in 2
replace address = "ул. Ильинка, д. 23 103132, Москва, Россия" in 3
geocode3, address(address)
gen coord = string(g_lat) + "," + string(g_lon)
geocode3, reverse coord(coord)
This works reasonably well:
. list r_addr , clean noobs
r_addr
121 San Lorenzo Avenue, Felton, CA 95018, USA
1600 Pennsylvania Avenue Northwest, President's Park, Washington, DC 20500, USA
ulitsa Ilyinka, 23, Moscow, Russia, 101000
The Kremlin does have a pretty different format.
Negative binomial regression question - is it a poor model?
I dispute the assertions from several points of view: i) While the canonical link may well be 'problematic', it's not immediately obvious that someone will be interested in that link - whereas, for example, the log-link in the Poisson is often both convenient and natural, and so people are often interested in that. Even so, in the Poisson case people do look at other link functions. So we needn't restrict our consideration to the canonical link. A 'problematic link' is not of itself an especially telling argument against negative binomial regression. The log-link, for example, seems to be quite a reasonable choice in some negative binomial applications - for example, in cases where the data might be conditionally Poisson but there's heterogeneity in the Poisson rate, the log-link can be almost as interpretable as it is in the Poisson case. By comparison, I use Gamma GLMs reasonably often, but I don't recall (textbook examples aside) ever having used its canonical link - I use the log-link almost always, since it's a more natural link for the kinds of problems I tend to work with. ii) "Little seems to have been made ... in applications" may have been just about true in 1989, but I don't think it stands now. [Even if it did stand now, that's not an argument that it's a poor model, only that it hasn't been widely used - which might happen for all manner of reasons.] Negative binomial regression has become more widely used as it has become more widely available, and I see it used in applications much more often now. In R, for example, I make use of the functions in MASS that support it (and the corresponding book, Venables and Ripley's Modern Applied Statistics with S, uses negative binomial regression in some interesting applications) -- and I've used some functionality in a few other packages even before I used it in R. 
I would have used negative binomial regression more, even earlier, if it had been readily available to me; I expect the same is true of many people - so the argument that it was little used seems to be more one of opportunity. While it's possible to avoid negative binomial regression (say, by using overdispersed Poisson models), and there are a number of situations where it really doesn't matter much what you do, there are various reasons why that's not entirely satisfactory. For example, when my interest is more toward prediction intervals than estimates of coefficients, the fact that the coefficients don't change may not be an adequate reason to avoid the negative binomial. Of course there are still other choices that model the dispersion (such as the Conway-Maxwell-Poisson that is the subject of the paper you mentioned); while those are certainly options, there are sometimes situations where I am quite happy that the negative binomial is a reasonably good 'fit' as a model for my problem. Are all these uses and recommendations in error? I really don't think so! If they were, it should have become reasonably clear by now. Indeed, if McCullagh and Nelder had continued to feel the same way, they had no lack of opportunity, nor any lack of forums in which to clarify the remaining issues. Nelder has passed away (2010), but McCullagh is apparently still around. If that short passage in McCullagh and Nelder is all they have, I'd say that's a pretty weak argument. What are the consequences of this problematic link? I think the issue is mainly that the variance function and the link function are related rather than unrelated (as is the case for pretty much all the other main GLM families in popular use), which makes interpretation on the scale of the linear predictor less straightforward (that's not to say it's the only issue; I do think it's the main issue for a practitioner). It's not a big deal. 
By way of comparison, I see Tweedie models being used much more widely in recent times, and I don't see people concerning themselves with the fact that $p$ appears both in the variance function and the canonical link (nor in most cases even worrying much about the canonical link). None of this is to take anything away from Conway-Maxwell-Poisson models (the subject of the Sellers and Shmueli paper), which are also becoming more widely used -- I certainly don't wish to take part in a negative binomial vs COM-Poisson shooting match. I simply don't see it as one-or-the-other, any more than (now speaking more widely) I take a purely Bayesian or purely frequentist stance on statistical problems. I'll use whatever strikes me as the best choice in the particular circumstances I am in, and each choice tends to have advantages and disadvantages.
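The "conditionally Poisson with heterogeneity in the rate" motivation mentioned above is easy to check by simulation. A pure-Python sketch (the parameter values are arbitrary): mixing a Poisson over a Gamma-distributed rate with mean $\mu$ and shape $\theta$ gives a marginal negative binomial with mean $\mu$ and variance $\mu + \mu^2/\theta$, i.e. overdispersion relative to the plain Poisson.

```python
import math
import random

random.seed(2)

def poisson(lam):
    """Knuth's algorithm: sample a Poisson(lam) variate."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Gamma-mixed Poisson: rate ~ Gamma(shape=theta, scale=mu/theta), so the
# marginal count is negative binomial with mean mu, variance mu + mu^2/theta.
mu, theta, n = 4.0, 2.0, 30_000
draws = [poisson(random.gammavariate(theta, mu / theta)) for _ in range(n)]

mean = sum(draws) / n
var = sum((d - mean) ** 2 for d in draws) / n
print(mean, var, mu + mu ** 2 / theta)  # variance well above the mean
```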
Degrees of freedom of $\chi^2$ in Hosmer-Lemeshow test
Hosmer D.W., Lemeshow S. (1980), A goodness-of-fit test for the multiple logistic regression model. Communications in Statistics, A10, 1043-1069 show that: If the model is a logistic regression model, the $p$ parameters are estimated by maximum likelihood, and the $G$ groups are defined on the estimated probabilities, then $X^2$ is asymptotically $\chi^2(G-p-1)+\sum_{i=1}^{p+1} \lambda_i \chi_i^2(1)$ (Hosmer, Lemeshow, 1980, p.1052, Theorem 2). (Note: the necessary conditions are not explicitly in Theorem 2 on page 1052, but if one attentively reads the paper and the proof then these pop up.) The second term $\sum_{i=1}^{p+1} \lambda_i \chi_i^2(1)$ results from the fact that the grouping is based on estimated - i.e. random - quantities (Hosmer, Lemeshow, 1980, p.1051). Using simulations they showed that this second term can be (in the cases used in the simulation) approximated by a $\chi^2(p-1)$ (Hosmer, Lemeshow, 1980, p.1060). Combining these two facts results in a sum of two $\chi^2$ variables, one with $G-p-1$ degrees of freedom and a second one with $p-1$ degrees of freedom, so $X^2 \sim \chi^2((G-p-1)+(p-1)) = \chi^2(G-2)$. So the answer to the question lies in the occurrence of the 'weighted chi-square term', i.e. in the fact that the groups are defined using estimated probabilities that are themselves random variables. See also Hosmer Lemeshow (1980) Paper - Theorem 2
8,065
Degrees of freedom of $\chi^2$ in Hosmer-Lemeshow test
The theorem that you refer to (the usual reduction part "usual reduction of degrees of freedom due to estimated parameters") has been mostly advocated by R.A. Fisher. In 'On the interpretation of Chi Square from Contingency Tables, and the Calculation of P' (1922) he argued to use the $(R-1)(C-1)$ rule, and in 'The goodness of fit of regression formulae' (1922) he argues to reduce the degrees of freedom by the number of parameters used in the regression to obtain expected values from the data. (It is interesting to note that people misused the chi-square test, with wrong degrees of freedom, for more than twenty years after its introduction in 1900.) Your case is of the second kind (regression) and not of the former kind (contingency table), although the two are related in that they are linear restrictions on the parameters. Because you model the expected values, based on your observed values, and you do this with a model that has two parameters, the 'usual' reduction in degrees of freedom is two plus one (an extra one because the $O_i$ need to sum up to a total, which is another linear restriction), and you end up effectively with a reduction of two, instead of three, because of the 'in-efficiency' of the modeled expected values. The chi-square test uses a $\chi^2$ as a distance measure to express how close a result is to the expected data. In the many versions of the chi-square tests the distribution of this 'distance' is related to the sum of deviations in normally distributed variables (which is true in the limit only and is an approximation if you deal with non-normally distributed data). 
For the multivariate normal distribution the density function is related to the $\chi^2$ by $f(x_1,...,x_k) = \frac{e^{- \frac{1}{2}\chi^2} }{\sqrt{(2\pi)^k \vert \mathbf{\Sigma}\vert}}$ with $\vert \mathbf{\Sigma}\vert$ the determinant of the covariance matrix of $\mathbf{x}$, and $\chi^2 = (\mathbf{x}-\mathbf{\mu})^T \mathbf{\Sigma}^{-1}(\mathbf{x}-\mathbf{\mu})$ is the Mahalanobis distance, which reduces to the Euclidean distance if $\mathbf{\Sigma}=\mathbf{I}$. In his 1900 article Pearson argued that the $\chi^2$-levels are spheroids and that he can transform to spherical coordinates in order to integrate a value such as $P(\chi^2 > a)$, which becomes a single integral. It is this geometrical representation, $\chi^2$ as a distance and also a term in the density function, that can help to understand the reduction of degrees of freedom when linear restrictions are present. First, the case of a 2x2 contingency table. You should notice that the four values $\frac{O_i-E_i}{\sqrt{E_i}}$ are not four independent normally distributed variables. They are instead related to each other and boil down to a single variable. Let's use the table $O_{ij} = \begin{array}{cc} o_{11} & o_{12} \\ o_{21} & o_{22} \end{array}$. If the expected values $E_{ij} = \begin{array}{cc} e_{11} & e_{12} \\ e_{21} & e_{22} \end{array}$ were fixed, then $\sum \frac{(o_{ij}-e_{ij})^2}{e_{ij}}$ would be distributed as a chi-square distribution with four degrees of freedom, but often we estimate the $e_{ij}$ based on the $o_{ij}$, and then the variation is not like that of four independent variables. Instead we get that all the differences between $o$ and $e$ are the same: $ \begin{array}\\&(o_{11}-e_{11}) &=\\ &(o_{22}-e_{22}) &=\\ -&(o_{21}-e_{21}) &=\\ -&(o_{12}-e_{12}) &= o_{11} - \frac{(o_{11}+o_{12})(o_{11}+o_{21})}{(o_{11}+o_{12}+o_{21}+o_{22})} \end{array}$ and they are effectively a single variable rather than four. 
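The claim that the four deviations $o_{ij}-e_{ij}$ of a 2x2 table with margin-estimated expected counts collapse to a single number (up to sign) can be verified directly. A quick Python sketch, using arbitrary made-up counts:

```python
import numpy as np

# Arbitrary 2x2 table of observed counts
o = np.array([[13.0, 7.0],
              [5.0, 11.0]])

# Expected counts under independence, estimated from the margins
row, col, n = o.sum(axis=1), o.sum(axis=0), o.sum()
e = np.outer(row, col) / n

d = o - e
# The four deviations are equal up to sign: d11 = d22 = -d12 = -d21
print(d)
```

For these counts the deviation matrix has a single magnitude, with the signs alternating along the diagonal and anti-diagonal.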
Geometrically you can see this as the $\chi^2$ value being integrated not on a four-dimensional sphere but on a single line. Note that this contingency table test is not the case for the contingency table in the Hosmer-Lemeshow test (it uses a different null hypothesis!). See also section 2.1, 'the case when $\beta_0$ and $\underline\beta$ are known', in the article of Hosmer and Lemeshow. In their case you get $2g-1$ degrees of freedom and not $g-1$ degrees of freedom as in the $(R-1)(C-1)$ rule. This $(R-1)(C-1)$ rule is specifically the case for the null hypothesis that row and column variables are independent (which creates $R+C-1$ constraints on the $o_i-e_i$ values). The Hosmer-Lemeshow test relates to the hypothesis that the cells are filled according to the probabilities of a logistic regression model based on four parameters in the case of distributional assumption A and $p+1$ parameters in the case of distributional assumption B. Second, the case of a regression. A regression does something similar to the difference $o-e$ as the contingency table and reduces the dimensionality of the variation. There is a nice geometrical representation for this, as the value $y_i$ can be represented as the sum of a model term $\beta x_i$ and a residual (not error) term $\epsilon_i$. The model term and the residual term each span a subspace, and these subspaces are perpendicular to each other. That means the residual terms $\epsilon_i$ cannot take any possible value! Namely, they are reduced by the part which projects onto the model, and more particularly by one dimension for each parameter in the model. Maybe the following images can help a bit. Below are 400 draws of three (uncorrelated) variables from the binomial distributions $B(n=60,p={1/6,2/6,3/6})$. They relate to normally distributed variables $N(\mu=np,\sigma^2=np(1-p))$. In the same image we draw the iso-surfaces for $\chi^2={1,2,6}$. 
Integrating over this space using spherical coordinates, such that we only need a single integration (because changing the angle does not change the density), over $\chi$ results in $\int_0^a e^{-\frac{1}{2} \chi^2 }\chi^{d-1} d\chi$, in which the $\chi^{d-1}$ part represents the area of the $d$-dimensional sphere. If we limited the variables $\chi$ in some way, the integration would not be over a $d$-dimensional sphere but over something of lower dimension. The image below can be used to get an idea of the dimensional reduction in the residual terms. It explains the least squares fitting method in geometric terms. In blue you have measurements. In red you have what the model allows. The measurement is often not exactly equal to the model and has some deviation. You can regard this, geometrically, as the distance from the measured point to the red surface. The red arrows $\mu_1$ and $\mu_2$ have values $(1,1,1)$ and $(0,1,2)$ and could be related to some linear model as x = a + b * z + error, or $\begin{bmatrix}x_{1}\\x_{2}\\x_{3}\end{bmatrix} = a \begin{bmatrix}1\\1\\1\end{bmatrix} + b \begin{bmatrix}0\\1\\2\end{bmatrix} + \begin{bmatrix}\epsilon_1\\\epsilon_2\\\epsilon_3\end{bmatrix} $, so the span of those two vectors $(1,1,1)$ and $(0,1,2)$ (the red plane) gives the values for $x$ that are possible in the regression model, and $\epsilon$ is the vector that is the difference between the observed value and the regression/modeled value. In the least squares method this vector is perpendicular (least distance is least sum of squares) to the red surface (and the modeled value is the projection of the observed value onto the red surface). So this difference between observed and (modeled) expected is a sum of vectors that are perpendicular to the model vectors (and this space has the dimension of the total space minus the number of model vectors). In our simple example case the total dimension is 3 and the model has 2 dimensions. 
And the error has dimension 1 (no matter which of the blue points you take, the error terms always have the same ratio and follow a single vector; the green arrows show a single example). I hope this explanation helps. It is in no way a rigorous proof, and there are some special algebraic tricks that need to be solved in these geometric representations. But anyway, I like these two geometrical representations: the one for Pearson's trick of integrating the $\chi^2$ using spherical coordinates, and the other for viewing the sum-of-least-squares method as a projection onto a plane (or larger span). I am always amazed at how we end up with $\frac{(o-e)^2}{e}$; this is in my point of view not trivial, since the normal approximation of a binomial involves a division not by $e$ but by $np(1-p)$. In the case of contingency tables you can work it out easily, but in the case of regression or other linear restrictions it does not work out so easily, while the literature is often very casual in arguing that 'it works out the same for other linear restrictions'. (An interesting example of the problem: if you perform the following experiment multiple times, 'toss two sets of 10 coins and only register the cases in which the total number of heads is 10', then you do not get the typical chi-square distribution for this "simple" linear restriction.)
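Pearson's radial trick above reduces $P(\chi^2 \le a^2)$ to a one-dimensional integral over $\chi$. As a numerical sanity check (a sketch, not part of the original answer), the normalized radial density $c\,\chi^{d-1}e^{-\chi^2/2}$ integrates to the CDF of the chi distribution, i.e. to $P(\chi^2 \le a^2)$ after squaring:

```python
from math import gamma, exp
from scipy import integrate, stats

d, a = 3, 2.0  # dimension and upper limit for chi
norm = 1.0 / (2 ** (d / 2 - 1) * gamma(d / 2))  # normalizing constant of the chi density

val, _ = integrate.quad(lambda c: norm * c ** (d - 1) * exp(-c ** 2 / 2), 0, a)

# Matches P(chi <= a) for a chi distribution with d degrees of freedom,
# i.e. P(chi^2 <= a^2) for a chi-square with d degrees of freedom
print(val, stats.chi.cdf(a, d), stats.chi2.cdf(a ** 2, d))
```

All three printed values agree, confirming that the single radial integral carries all the probability content of the $d$-dimensional spherical density.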
8,066
Can you be 93,75% confident from a random sample of only five from a population of 10 000?
Let's ignore the numbers for a bit. If we draw five observations from the population, the probability that all five observations are above the median is $\left({1\over 2}\right)^5 = 1/32 = 0.03125$, and similarly for the probability that all five observations are below the median. As the events "above the median" and "below the median" are mutually exclusive, we can calculate the probability that all five observations are either entirely above the median or entirely below the median as the sum of the probabilities: $0.03125 + 0.03125 = 0.0625$. Consequently, the probability that a sample will "enclose" the median is just $1 - 0.0625 = 0.9375$. After you've drawn the sample, of course, probabilities don't apply anymore, but you can construct a $93.75\%$ confidence interval for the median in the obvious way by using the largest and smallest observations.
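The arithmetic above is trivial to reproduce, and a Monte Carlo check (a Python sketch, not from the original answer) confirms the coverage:

```python
import numpy as np

# Exact coverage: 1 - 2 * (1/2)^5
coverage = 1 - 2 * 0.5 ** 5
print(coverage)  # 0.9375

# Simulation: fraction of size-5 samples whose min and max enclose the median.
# Any continuous distribution works; for uniform(0,1) the median is 0.5.
rng = np.random.default_rng(0)
x = rng.random((200_000, 5))
hits = (x.min(axis=1) < 0.5) & (x.max(axis=1) > 0.5)
print(hits.mean())  # close to 0.9375
```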
8,067
Can you be 93,75% confident from a random sample of only five from a population of 10 000?
Yes, this really works, under certain conditions, with a couple of caveats. Random selection: you can't just ask any 5 people. They would need to be randomly selected from the population whose median you wanted an interval for. Understanding what a confidence interval means: the interval for a parameter will have a certain coverage ... but that doesn't necessarily correspond to how confident you personally are about it ... personal confidence is not the same thing as coverage. Specifically, that 93.75 percent is a frequentist probability - a long-run proportion. Loosely, if you use the same methodology many, many times, about 93.75 percent of those intervals will include the population median. The calculation of the coverage is based on assuming continuous responses. It's not necessarily very useful; the range of 5 values will tend to be quite wide. The calculation of the coverage is mathematically straightforward (see the last paragraph below) but it's also easy to see via simulation. e.g. here's a quick simulation in R: mean(replicate(1000000,between(range(runif(5)),0.5))) [1] 0.937464 (where between is just: function(x, m) x[1]<m & x[2]>m; if you were doing it for a discrete variable you'd want <= and >= and to define your interval to be closed; it doesn't matter in the continuous case) It doesn't really matter how big the population was; this calculation effectively uses an infinite population. A small population would not have a lower chance. I used the uniform distribution as a source of continuous random numbers, but the same result would apply for any other continuous distribution, since the order relationships are unaltered by any monotonic transformation. With a continuous variable, the probability that all the values lie to the left of the population median is $\left(\frac{1}{2}\right)^5 = \frac{1}{32}$, and similarly for them all to be to the right. Consequently the coverage of the range of 5 randomly selected values is $\frac{15}{16} = 0.9375$.
8,068
Can you be 93,75% confident from a random sample of only five from a population of 10 000?
The other answers have this exactly correct, but I'll explain why it seems so surprising. The trick is that the way the problem is posed hides the goalposts a little bit. We know we have a tiny sample and a high-confidence CI, but the problem sort of glosses over the fact that when choosing even just 5 individuals, the width of the "max-min" range will usually be quite large. It should not be terribly surprising that we can confidently claim that the median is within some very large range. We are likely treading into the territory of "statistically significant, but practically useless". Even very small samples can be used to make conclusions of arbitrary statistical confidence simply by relaxing the width of the tested interval. Here, the sampling approach naturally gives us a large interval, which might be the surprising part. A knee-jerk reaction might be to think that a small sample size and a high-confidence CI are incompatible and cannot be observed together. But given any sample size at all, you can build a CI of any confidence you want, so long as you make it wide enough. What's surprising here is just how wide a range you get, on average, when selecting only 5 individuals from the population. Choosing 5 individuals from any distribution at all results in a range that covers, on average, the middle two thirds of the population! And since this method tends to put the range nearer the middle than the extremes of possible values, the chance of containing the median individual is even higher than the percentage of the population covered. With that knowledge, it shouldn't be surprising to define a range using a method that usually covers a majority of the population, and be quite confident that the median is in that range. Yes, we have a method that reliably generates a range that contains the median, but that range is so large that it usually contains most other observed values, too. 
It's already unlikely to pick 5 individuals and find a range that covers less than 50 percentiles of the population, and even less likely to have that sub-majority range land entirely on one side of the median, which is the only way you can avoid containing the median.
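The "middle two thirds" claim above follows from order statistics: for a sample of size $n$ from any continuous distribution, the expected fraction of the population between the sample minimum and maximum is $(n-1)/(n+1)$, which is $4/6 = 2/3$ for $n = 5$. A quick Python sketch to check it:

```python
import numpy as np

n = 5
print((n - 1) / (n + 1))  # 2/3: expected population fraction covered by the range

# Simulation on uniform(0,1), where the covered fraction is simply max - min
rng = np.random.default_rng(0)
u = rng.random((200_000, n))
frac = u.max(axis=1) - u.min(axis=1)
print(frac.mean())  # close to 2/3
```

The uniform case suffices because, as in the other answer, monotonic transformations preserve order relationships, so the covered fraction has the same distribution for every continuous population.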
Can you be 93,75% confident from a random sample of only five from a population of 10 000?
The other answers have this exactly correct, but I'll explain why it seems so surprising. The trick is that the way the problem is posed hides the goalposts a little bit. We know we have a tiny sample
Can you be 93,75% confident from a random sample of only five from a population of 10 000? The other answers have this exactly correct, but I'll explain why it seems so surprising. The trick is that the way the problem is posed hides the goalposts a little bit. We know we have a tiny sample and a high-confidence CI, but the problem sort of glosses over the fact that when choosing even just 5 individuals, the width of the "max-min" range will usually be quite large. It should not be terribly surprising that we can confidently claim that the median is within some very large range. We are likely treading into the territory of "statistically significant, but practically useless". Even very small samples can be used to make conclusions of arbitrary statistical confidence simply by relaxing the width of the tested interval. Here, the sampling approach naturally gives us a large interval, which might be the surprising part. A knee-jerk reaction might be to think that a small sample size and high-confidence CI are incompatible and cannot be observed together. But given any sample size at all, you can build a CI of any confidence you want, so long as you make it wide enough. What's surprising here is just how wide of a range you get, on average, when selecting only 5 individuals from the population. Choosing 5 individuals from any distribution at all results in a range that covers, on average, the middle two thirds of the population! And since this method tends to put the range nearer the middle than the extremes of possible values, the chance of containing the median individual is even higher than percentage of the population covered. With that knowledge, it shouldn't be surprising to define a range using a method that usually covers a majority of the population, and be quite confident that the median is in that range. 
Yes, we have a method that reliably generates a range that contains the median, but that range is so large that it usually contains most other observed values, too. It's already unlikely to pick 5 individuals and find a range that covers less than 50 percentiles of the population, and even less likely to have that sub-majority range land entirely on one side of the median, which is the only way you can avoid containing the median.
8,069
Is there a plateau-shaped distribution?
You may be looking for the distribution known under the names of generalized normal (version 1), Subbotin distribution, or exponential power distribution. It is parametrized by location $\mu$, scale $\sigma$ and shape $\beta$ with pdf $$ \frac{\beta}{2\sigma\Gamma(1/\beta)} \exp\left[-\left(\frac{|x-\mu|}{\sigma}\right)^{\beta}\right] $$ As you can notice, for $\beta=1$ it reduces to the Laplace distribution, for $\beta=2$ to the normal, and as $\beta \to \infty$ it converges to the uniform distribution. If you are looking for software that has it implemented, you can check the normalp package for R (Mineo and Ruggieri, 2005). What is nice about this package is that, among other things, it implements regression with generalized normally distributed errors, i.e. minimizing the $L_p$ norm. Mineo, A. M., & Ruggieri, M. (2005). A software tool for the exponential power distribution: The normalp package. Journal of Statistical Software, 12(4), 1-24.
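A quick numeric check of the formula (my own sketch, plain Python): the density integrates to 1 for each $\beta$, and a large $\beta$ flattens the top into a plateau:

```python
import math

def gn_pdf(x, mu=0.0, sigma=1.0, beta=2.0):
    # generalized normal / exponential power density from the answer
    c = beta / (2.0 * sigma * math.gamma(1.0 / beta))
    return c * math.exp(-((abs(x - mu) / sigma) ** beta))

def total_mass(f, lo=-20.0, hi=20.0, n=40_000):
    # midpoint rule; plenty accurate for these fast-decaying densities
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

masses = {b: total_mass(lambda x, b=b: gn_pdf(x, beta=b)) for b in (1.0, 2.0, 8.0)}
# larger beta -> flatter top: the density barely drops between x = 0 and x = 0.5
flat_ratio = gn_pdf(0.5, beta=8.0) / gn_pdf(0.0, beta=8.0)
print(masses, flat_ratio)
```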
8,070
Is there a plateau-shaped distribution?
@StrongBad's comment is a really good suggestion. The sum of a uniform RV and a Gaussian RV can give you exactly what you're looking for if you pick the parameters right. And it actually has a reasonably nice closed-form solution. The pdf of this variable is given by the expression: $$\dfrac{1}{4a}\left[\mathrm{erf}\left(\dfrac{x+a}{\sigma\sqrt{2}}\right)-\mathrm{erf}\left(\dfrac{x-a}{\sigma\sqrt{2}}\right) \right]$$ $a$ is the "radius" of the zero-mean uniform RV. $\sigma$ is the standard deviation of the zero-mean Gaussian RV.
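Here's a quick numeric verification of that erf expression (my sketch, with illustrative parameter values): it integrates to 1, is flat at height $1/(2a)$ near the centre, and vanishes in the tails:

```python
import math

def unif_plus_normal_pdf(x, a=2.0, sigma=0.5):
    # density of U(-a, a) + N(0, sigma^2), i.e. the erf expression above
    s = sigma * math.sqrt(2.0)
    return (math.erf((x + a) / s) - math.erf((x - a) / s)) / (4.0 * a)

step, lo, hi = 0.001, -10.0, 10.0
n = int((hi - lo) / step)
area = sum(unif_plus_normal_pdf(lo + (i + 0.5) * step) for i in range(n)) * step

centre = unif_plus_normal_pdf(0.0)  # flat top at ~1/(2a) = 0.25
tail = unif_plus_normal_pdf(8.0)    # essentially zero far out
print(area, centre, tail)
```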
8,071
Is there a plateau-shaped distribution?
There's an infinite number of "plateau-shaped" distributions. Were you after something more specific than "in between the Gaussian and the uniform"? That's somewhat vague. Here's one easy one: you could always stick a half-normal at each end of a uniform: You can control the "width" of the uniform relative to the scale of the normal so you can have wider or narrower plateaus, giving a whole class of distributions, which include the Gaussian and the uniform as limiting cases. The density is: $\frac{h}{\sqrt{2\pi}\sigma} e^{-\frac{1}{2\sigma^2}(x-\mu+w/2)^2} \mathbb{I}_{x\leq \mu-w/2} \\ + \:\frac{h}{\sqrt{2\pi}\sigma}\quad\mathbb{I}_{\mu-w/2< x\leq \mu+w/2} \\ + \frac{h}{\sqrt{2\pi}\sigma} e^{-\frac{1}{2\sigma^2}(x-\mu-w/2)^2} \mathbb{I}_{x > \mu+w/2} $ where $h = \frac{1}{1 + w/(\sqrt{2\pi}\sigma)}$ As $\sigma \to 0$ for fixed $w$, we approach the uniform on $(\mu-w/2,\mu+w/2)$ and as $w \to 0$ for fixed $\sigma$ we approach $N(\mu,\sigma^2)$. Here are some examples (with $\mu=0$ in each case): We might perhaps call this density a "Gaussian-tailed uniform".
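A short numeric check of this density (my own sketch, with illustrative $w$ and $\sigma$) confirms that the constant $h$ normalizes it:

```python
import math

def gauss_tailed_uniform_pdf(x, mu=0.0, w=2.0, sigma=1.0):
    # uniform centre of width w with half-normal tails, as in the answer
    h = 1.0 / (1.0 + w / (math.sqrt(2.0 * math.pi) * sigma))
    peak = h / (math.sqrt(2.0 * math.pi) * sigma)
    if x <= mu - w / 2:
        return peak * math.exp(-0.5 * ((x - mu + w / 2) / sigma) ** 2)
    if x <= mu + w / 2:
        return peak  # the plateau
    return peak * math.exp(-0.5 * ((x - mu - w / 2) / sigma) ** 2)

step, lo, hi = 0.001, -12.0, 12.0
n = int((hi - lo) / step)
area = sum(gauss_tailed_uniform_pdf(lo + (i + 0.5) * step) for i in range(n)) * step
print(area)  # ~1
```

The normalization also falls out in closed form: the plateau carries mass $hw/(\sqrt{2\pi}\sigma)$ and the two half-normal tails together carry $h$, which sum to 1 by the definition of $h$.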
8,072
Is there a plateau-shaped distribution?
See my "Devil's tower" distribution in [1]: $f(x) = 0.3334$, for $|x| < 0.9399$; $f(x) = 0.2945/x^2$, for $0.9399 \leq |x| < 2.3242$; and $f(x) = 0$, for $2.3242 \leq |x|$. The "slip-dress" distribution is even more interesting. It is easy to construct distributions having whatever shape you want. [1]: Westfall, P.H. (2014) "Kurtosis as Peakedness, 1905 – 2014. R.I.P." Am. Stat. 68(3): 191–195. doi:10.1080/00031305.2014.917055 public access pdf: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4321753/pdf/nihms-599845.pdf
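A quick check of the quoted constants (my sketch): the flat part and the two $1/x^2$ tail pieces together carry total mass 1, up to the rounding of the published constants:

```python
def devils_tower(x):
    # piecewise density with the constants quoted in the answer
    ax = abs(x)
    if ax < 0.9399:
        return 0.3334
    if ax < 2.3242:
        return 0.2945 / (x * x)
    return 0.0

# closed-form mass of the pieces: a rectangle plus two integrated 1/x^2 tails
flat = 2 * 0.3334 * 0.9399
tails = 2 * 0.2945 * (1 / 0.9399 - 1 / 2.3242)
total = flat + tails
print(total)  # ~1, up to rounding of the constants
```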
8,073
Is there a plateau-shaped distribution?
Lots of nice answers. The solution proffered here has 2 features: (i) that it has a particularly simple functional form, and (ii) that the resulting distribution necessarily produces a plateau-shaped pdf (not just as a special case). I'm not sure if this already has a name in the literature, but absent same, let us call it a Plateau distribution with pdf $f(x)$: $$f(x) = k \frac{1}{1 + x^{2 a}} \quad \quad \text{for } x \in \mathbb{R}$$ where: parameter $a$ is a positive integer, and $k$ is a constant of integration: $k = \frac{a}{\pi} \sin \left(\frac{\pi}{2 a}\right)$ Here is a plot of the pdf, for different values of parameter $a$: . As parameter $a$ becomes large, the density tends towards a Uniform(-1,1) distribution. The following plot also compares to a standard Normal (gray dashed):
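A short numeric check (my own sketch) that the stated constant $k$ normalizes the density for several values of $a$:

```python
import math

def plateau_pdf(x, a):
    # f(x) = k / (1 + x^(2a)), with k = (a/pi) * sin(pi/(2a)) as stated
    k = (a / math.pi) * math.sin(math.pi / (2 * a))
    return k / (1.0 + x ** (2 * a))

def total_mass(a, lo=-50.0, hi=50.0, n=100_000):
    # midpoint rule; tails decay like 1/x^(2a), so +-50 is plenty for a >= 2
    h = (hi - lo) / n
    return sum(plateau_pdf(lo + (i + 0.5) * h, a) for i in range(n)) * h

masses = {a: total_mass(a) for a in (2, 3, 8)}
print(masses)  # each ~1.0; note a = 1 recovers the standard Cauchy density
```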
8,074
Is there a plateau-shaped distribution?
Another one (EDIT: I simplified it now. EDIT2: I simplified it even further, though now the picture doesn't really reflect this exact equation): $$f(x) = \frac{1}{3 \cdot \alpha} \cdot \log{\left( \frac{\cosh{\left(\alpha \cdot a\right)}+ \cosh{\left(\alpha \cdot x\right)}} {\cosh{\left(\alpha \cdot b\right)}+ \cosh{\left(\alpha \cdot x\right)}} \right)} $$ Clunky, I know, but here I took advantage of the fact that $\log(\cosh(x))$ approaches a line as $x$ increases. Basically you have control over how smooth the transition is ($\alpha$). If $a = 2$ and $b = 1$ I guarantee it's a valid probability density (integrates to 1). If you choose other values then you'll have to renormalize it. Here is some sample code in R: f = function(x, a, b, alpha){ y = log((cosh(2*alpha*pi*a)+cosh(2*alpha*pi*x))/(cosh(2*alpha*pi*b)+cosh(2*alpha*pi*x))) y = y/pi/alpha/6 return(y) } f is our distribution. Let's plot it for a sequence of x: plot(0, type = "n", xlim = c(-5,5), ylim = c(0,0.4)) x = seq(-100,100,length.out = 10001L) for(i in 1:10){ y = f(x = x, a = 2, b = 1, alpha = seq(0.1,2, length.out = 10L)[i]); print(paste("integral =", round(sum(0.02*y), 3L))) lines(x, y, type = "l", col = rainbow(10, alpha = 0.5)[i], lwd = 4) } legend("topright", paste("alpha =", round(seq(0.1,2, length.out = 10L), 3L)), col = rainbow(10), lwd = 4) Console output: #[1] "integral = 1" #[1] "integral = 1" #[1] "integral = 1" #[1] "integral = 1" #[1] "integral = 1" #[1] "integral = 1" #[1] "integral = 1" #[1] "integral = NaN" #I suspect overflow in cosh(); inspecting the plots shows no divergence at all #[1] "integral = NaN" #[1] "integral = NaN" And plot: You could change a and b, approximately the start and end of the slope respectively, but then further normalization would be needed, and I didn't calculate it (that's why I'm using a = 2 and b = 1 in the plot).
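For what it's worth, a quick numeric check (my sketch, in Python rather than R) that the displayed equation really does integrate to 1 when $a = 2$ and $b = 1$; I use an illustrative $\alpha = 4$, but the result holds for any $\alpha$:

```python
import math

def logcosh_plateau(x, a=2.0, b=1.0, alpha=4.0):
    # the displayed equation (the answer's R code uses a different scaling)
    num = math.cosh(alpha * a) + math.cosh(alpha * x)
    den = math.cosh(alpha * b) + math.cosh(alpha * x)
    return math.log(num / den) / (3.0 * alpha)

step, lo, hi = 0.001, -10.0, 10.0
n = int((hi - lo) / step)
area = sum(logcosh_plateau(lo + (i + 0.5) * step) for i in range(n)) * step
print(area)  # ~1 for a = 2, b = 1
```

(In fact the integral works out to $(a^2 - b^2)/3$ exactly, which is why $a = 2$, $b = 1$ needs no renormalization.)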
8,075
Is there a plateau-shaped distribution?
If you are looking for something very simple, with a central plateau and the sides of a triangle distribution, you can for instance combine N triangle distributions, with N depending on the desired ratio between the plateau and the descent. Why triangles? Because their sampling functions already exist in most languages. You then randomly draw from one of them. In R that would give: library(triangle) rplateau = function(n=1){ replicate(n, switch(sample(1:3, 1), rtriangle(1, 0, 2), rtriangle(1, 1, 3), rtriangle(1, 2, 4))) } hist(rplateau(1E5), breaks=200)
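The same mixture idea can be sketched in Python with the standard library alone (my own port of the R snippet; random.triangular plays the role of rtriangle and peaks at the midpoint by default):

```python
import random

random.seed(1)

def rplateau():
    # equal-weight mixture of three overlapping symmetric triangles
    lo, hi = random.choice([(0.0, 2.0), (1.0, 3.0), (2.0, 4.0)])
    return random.triangular(lo, hi)

draws = [rplateau() for _ in range(50_000)]
mean = sum(draws) / len(draws)
print(mean, min(draws), max(draws))  # mean ~2, support [0, 4]
```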
8,076
Is there a plateau-shaped distribution?
Here's a pretty one: the product of two logistic functions. (1/B) * 1/(1+exp(A*(x-B))) * 1/(1+exp(-A*(x+B))) This has the benefit of not being piecewise. B adjusts the width and A adjusts the steepness of the drop off. Shown below are B=1:6 with A=2. Note: I haven't taken the time to figure out how to properly normalize this.
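One way to handle the missing normalization (my own sketch): drop the 1/B prefactor, compute the normalizing constant by simple quadrature, and divide. For A = 2, B = 3 the constant comes out very close to 2B, which matches the intuition of a plateau of width 2B and height 1:

```python
import math

A, B = 2.0, 3.0

def unnormalized(x):
    # product of the two logistic factors from the answer
    return 1.0 / ((1.0 + math.exp(A * (x - B))) * (1.0 + math.exp(-A * (x + B))))

step, lo, hi = 0.001, -30.0, 30.0
n = int((hi - lo) / step)
Z = sum(unnormalized(lo + (i + 0.5) * step) for i in range(n)) * step

area = sum(unnormalized(lo + (i + 0.5) * step) / Z for i in range(n)) * step
print(Z, area)  # Z ~ 2B here; dividing by it gives unit area
```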
8,077
What is the name of the statistical fallacy whereby outcomes of previous coin flips influence beliefs about subsequent coin flips?
It's called the Gambler's fallacy.
8,078
What is the name of the statistical fallacy whereby outcomes of previous coin flips influence beliefs about subsequent coin flips?
The first sentence of this question incorporates another (related) fallacy: "As we all know, if you flip a coin that has an equal chance of landing heads as it does tails, then if you flip the coin many times, half the time you will get heads and half the time you will get tails." No we won't get that; we won't get heads half the time and tails half the time. If we were to get that, then the Gambler would not be so mistaken after all. The mathematical expression for this verbal statement is as follows: For some "large" (but finite) $n'$, we have $n_{h} = \frac {n'}{2}$, where evidently $n_{h}$ denotes the number of times the coin lands heads. Since $n'$ is finite, then $n'+1$ is also finite and a distinct value from $n'$. So what happens after the $n'+1$ flip has been made? Either it landed heads, or not. In both cases, $n_h$ has just stopped being equal to "half the number of tosses". But perhaps what we really meant was an "unimaginably large" $n$? Then we state $$\lim_{n\rightarrow \infty}n_{h} = \frac n{2}$$ But here, the RHS ("right-hand side") contains $n$, which by the LHS ("left-hand side") has passed over to infinity. So the RHS is also infinity, and so what this statement says is that the number of times the coin will land heads is equal to infinity, if we toss the coin an infinite number of times (the division by $2$ is negligible): $$\lim_{n\rightarrow \infty}n_{h} = \frac n{2} = \infty$$ This is an essentially correct, but useless statement, and obviously not what we have in mind. In all, the statement in the question does not hold, irrespective of whether "total tosses" is considered finite or not. Perhaps then we should state $$\lim_{n\rightarrow \infty}\frac {n_{h}}{n} = \frac 1{2} \;\;?$$ First, this translates into "The ratio of the number of landed heads over the total number of tosses tends to the value $1/2$ when the number of tosses tends to infinity", which is a different statement - no "half of the total tosses" here.
Also, this is how probability is still sometimes perceived: as a deterministic limit of relative frequencies. The problem with this statement is that it contains in the LHS an indeterminate form: both numerator and denominator go to infinity. Hmmm, let's bring in the random variable arsenal. Define a random variable $X_i$ as taking the value $1$ if the $i$-th toss came up heads, $0$ if it came up tails. Then we have $$ \frac {n_{h}}{n} = \frac 1n \sum_{i=1}^nX_i$$ Can we now at least state $$\lim_{n\rightarrow \infty}\frac 1n \sum_{i=1}^nX_i = \frac 1{2} \;\;?$$ No. This is a deterministic limit. It permits all possible realizations of the sequence of the $X$'s, and so it does not even guarantee that a limit will exist, let alone it being equal to $1/2$. In fact such a statement can only be seen as a constraint on the sequence, and it would destroy the independence of the tosses. What we can say is that this average sum converges in probability ("weakly") to $1/2$ (Bernoulli's Weak Law of Large Numbers), $$\lim_{n\rightarrow \infty}\text {Pr}\left(\left|\frac 1n \sum_{i=1}^nX_i-\frac 12 \right|<\varepsilon\right) =1 , \;\;\;\forall \varepsilon >0$$ and, in the case under consideration, that it also converges almost surely ("strongly") (Borel's Strong Law of Large Numbers) $$\text {Pr}\left(\lim_{n\rightarrow \infty}\frac 1n \sum_{i=1}^nX_i=\frac 12 \right) =1$$ But these are probabilistic statements about the probability associated with the difference between $n_h/n$ and $1/2$, and not about the limit of the difference $n_h-n_t$ (which according to the false statement should be zero - and it is not). Admittedly, it takes some dedicated intellectual effort to really understand these two statements, and how they differ (in "theory" and in "practice") from some of the previous ones; I do not claim such deep understanding for myself as yet.
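A small simulation (my own addition) illustrates the distinction: the proportion of heads settles near 1/2 even while the raw difference between heads and tails wanders freely and need not shrink:

```python
import random

random.seed(2)

n_tosses = 100_000
heads = 0
props = {}
for i in range(1, n_tosses + 1):
    heads += random.getrandbits(1)  # one fair coin toss
    if i in (100, 10_000, 100_000):
        props[i] = heads / i        # running proportion of heads

diff = 2 * heads - n_tosses  # heads minus tails after all tosses
print(props, diff)  # the proportion approaches 1/2; diff need not approach 0
```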
8,079
What is the name of the statistical fallacy whereby outcomes of previous coin flips influence beliefs about subsequent coin flips?
This fallacy has many names. 1) It's probably best known as the Gambler's fallacy. 2) It's also sometimes called the 'law of small numbers' (also see here) (because it relates to the idea that the population characteristics must be reflected in small samples) - which I think is a neat name for its contrast with the law of large numbers, but unfortunately the same name is applied to the Poisson distribution (and also sometimes used by mathematicians to mean something else again), so that can be confusing. 3) Among people who believe the fallacy it is sometimes called the 'law of averages', which in particular tends to be invoked after a run without some outcome to argue that the outcome is 'due' - but of course no such short-run law exists; nothing acts to 'compensate' for an initial imbalance - the only way an initial discrepancy is wiped out is by the volume of later values, which themselves have an average of 1/2. Consider an experiment in which a fair coin is tossed repeatedly; let $H_i$ be the number of heads and $T_i$ be the number of tails observed up to the end of the $i$-th trial. Note that $i=H_i+T_i$. It's interesting to note that in the long run (i.e. $n\to\infty$), while $\frac{H_n}{n}$ does converge in probability to $\frac{1}{2}$, $E|H_n-T_n|$ grows with increasing $n$ - indeed it grows without bound; there's nothing "pushing it back toward 0".
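A quick simulation of that last point (my own sketch): the estimated $E|H_n - T_n|$ keeps growing with $n$, roughly like $\sqrt{2n/\pi}$, so quadrupling $n$ about doubles it:

```python
import math
import random

random.seed(3)

def mean_abs_diff(n, trials=2_000):
    # Monte Carlo estimate of E|H_n - T_n| over many runs of n fair tosses
    total = 0
    for _ in range(trials):
        h = sum(random.getrandbits(1) for _ in range(n))
        total += abs(2 * h - n)  # |heads - tails|
    return total / trials

est = {n: mean_abs_diff(n) for n in (100, 400, 1_600)}
print(est)  # increasing in n, tracking sqrt(2 * n / pi)
```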
8,080
What is the name of the statistical fallacy whereby outcomes of previous coin flips influence beliefs about subsequent coin flips?
Just to note that if you get a huge run of heads or tails in a row, you may be better off revisiting your prior assumption that the coin is fair.
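As a toy illustration of that update (my own numbers, purely hypothetical priors): compare a fair coin against a two-headed one after a run of k heads:

```python
# Illustrative numbers (not from the answer): prior 99% that the coin is fair,
# 1% that it is a two-headed coin, updated after seeing k heads in a row.
def posterior_fair(k, prior_fair=0.99):
    like_fair = 0.5 ** k   # P(k heads in a row | fair coin)
    like_biased = 1.0      # P(k heads in a row | two-headed coin)
    num = prior_fair * like_fair
    return num / (num + (1.0 - prior_fair) * like_biased)

probs = {k: posterior_fair(k) for k in (0, 5, 10, 20)}
print(probs)  # belief in fairness collapses as the run lengthens
```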
8,081
What is the name of the statistical fallacy whereby outcomes of previous coin flips influence beliefs about subsequent coin flips?
Are you thinking of 'stochastic'? The flip of a fair coin (or the roll of a fair die) is stochastic (i.e. independent) in the sense that it does not depend on a previous flip of such a coin. Assuming a fair coin, the fact that the coin has been flipped a hundred times with a hundred heads resulting does not change the fact that the next flip has a 50/50 chance of being heads. In contrast, drawing a card from a deck of cards without replacement is not independent in this sense, because drawing a certain card changes the likelihood of drawing each remaining card on the next draw (if the draw were with replacement, it would be independent).
8,082
What is the name of the statistical fallacy whereby outcomes of previous coin flips influence beliefs about subsequent coin flips?
Adding on to Glen_b's and Alecos's responses, let's define $X_n$ to be the number of heads in the first $n$ trials. A familiar result using the normal approximation to the binomial is that $X_n$ is approximately $N(n/2, \sqrt{n/4})$ (mean and standard deviation).

Now, before observing the first 100 tosses, your friend is correct that there is a good chance that $X_{1000}$ will be close to 500. In fact, $P(469 < X_{1000} < 531) \approx .95$. However, after observing $X_{100} = 100$, let's define $Y_{900}$ to be the number of heads in the last 900 trials. Then $P(469 < X_{1000} < 531 \mid X_{100}=100) = P(369 < Y_{900} < 431) \approx .1$, since $Y_{900}$ is approximately $N(450, 15)$. Thus, after observing 100 heads in the first 100 trials, there is no longer a high probability of observing close to 500 heads in the first 1000 trials, assuming of course that the coin is fair. Note that this is a concrete example illustrating that an initial imbalance is unlikely to be compensated for in the short run.

Further, note that if $n=1,000,000$, then $P(499,020 < X_{1,000,000} < 500,980) \approx .95$, but the impact of the imbalance in the first 100 tosses is negligible in the long run, since $P(499,020 < X_{1,000,000} < 500,980 \mid X_{100} = 100) = P(498,920 < Y_{999,900} < 500,880) \approx .949$.
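The 1000-toss figures are easy to reproduce with the normal approximation; the sketch below (plain Python using only the standard library; not from the original answer) recomputes both probabilities.

```python
from math import erf, sqrt

def Phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def prob_heads_between(lo, hi, mean, n):
    # Normal approximation: heads count ~ N(mean, sqrt(n/4))
    sd = sqrt(n / 4.0)
    return Phi((hi - mean) / sd) - Phi((lo - mean) / sd)

# Before any tosses: X_1000 ~ N(500, sqrt(250))
p_before = prob_heads_between(469, 531, 500, 1000)   # ~ 0.95

# Given 100 straight heads: Y_900 ~ N(450, 15), need 369 < Y_900 < 431
p_after = prob_heads_between(369, 431, 450, 900)     # ~ 0.10

print(round(p_before, 2), round(p_after, 2))
```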
8,083
What is the name of the statistical fallacy whereby outcomes of previous coin flips influence beliefs about subsequent coin flips?
You are referring to the Gambler's fallacy, although this is not entirely correct. Indeed, if phrased as "given an assumed fair coin and an observed sequence of outcomes, what is the estimate of the elementary probabilities of the coin?", this becomes more apparent. The "fallacy" relates only to (assumed) fair coins, where the various products of probabilities are equal. However, this entails an interpretation that contrasts with (the study of) similar cases with a coin having another (non-symmetric/biased) probability distribution. This is like the fallacy, present in many statistical studies, of taking correlation to imply causation. But of course it can be a hint of a causal relation, or at least a common cause, in some cases.
8,084
What can cause PCA to worsen results of a classifier?
Consider a simple case, lifted from a terrific and undervalued article, "A Note on the Use of Principal Components in Regression". Suppose you have only two (scaled and de-meaned) features; denote them $x_1$ and $x_2$, with positive correlation equal to 0.5, aligned in $X$, and a third response variable $Y$ you wish to classify. Suppose that the classification of $Y$ is fully determined by the sign of $x_1 - x_2$.

Performing PCA on $X$ results in the new (ordered by variance) features $[x_1 + x_2, x_1 - x_2]$, since $\operatorname{Var}(x_1 + x_2) = 1 + 1 + 2\rho > \operatorname{Var}(x_1 - x_2) = 2 - 2\rho$. Therefore, if you reduce your dimension to 1, i.e. the first principal component, you are throwing away the exact solution to your classification!

The problem occurs because PCA is agnostic to $Y$. Unfortunately, one cannot include $Y$ in the PCA either, as this would result in data leakage. Data leakage is when your matrix $X$ is constructed using the target variable in question, hence any predictions out-of-sample will be impossible. For example: in financial time series, trying to predict the European end-of-day close, which occurs at 11:00am EST, using American end-of-day closes, at 4:00pm EST, is data leakage, since the American closes, which occur hours later, have incorporated the prices of the European closes.
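A minimal numerical sketch of this example (hypothetical Python/NumPy, not from the article): with $\rho = 0.5$, the top principal component lines up with the $x_1 + x_2$ direction, and classifying on it alone does no better than chance.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 200_000, 0.5

# Two standardized features with correlation rho; the label is fully
# determined by the sign of x1 - x2.
cov = np.array([[1.0, rho], [rho, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=n)
y = np.sign(X[:, 0] - X[:, 1])

# PCA via eigendecomposition of the sample covariance matrix.
evals, evecs = np.linalg.eigh(np.cov(X.T))   # eigenvalues ascending
pc1 = X @ evecs[:, -1]                       # top-variance component

# The leading eigenvector is ~ (1, 1)/sqrt(2), i.e. the x1 + x2
# direction, which is uncorrelated with x1 - x2 -- so a classifier
# using PC1 alone is no better than a coin flip.
acc = np.mean(np.sign(pc1) == y)
print(np.round(np.abs(evecs[:, -1]), 2), round(acc, 2))
```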
8,085
What can cause PCA to worsen results of a classifier?
There is a simple geometric explanation. Try the following example in R and recall that the first principal component maximizes variance.

    library(ggplot2)
    n <- 400
    z <- matrix(rnorm(n * 2), nrow = n, ncol = 2)
    y <- sample(c(-1, 1), size = n, replace = TRUE)

    # PCA helps
    df.good <- data.frame(
      y = as.factor(y),
      x = z + tcrossprod(y, c(10, 0))
    )
    qplot(x.1, x.2, data = df.good, color = y) + coord_equal()

    # PCA hurts
    df.bad <- data.frame(
      y = as.factor(y),
      x = z %*% diag(c(10, 1), 2, 2) + tcrossprod(y, c(0, 8))
    )
    qplot(x.1, x.2, data = df.bad, color = y) + coord_equal()

PCA Helps
The direction of maximal variance is horizontal, and the classes are separated horizontally.

PCA Hurts
The direction of maximal variance is horizontal, but the classes are separated vertically.
8,086
What can cause PCA to worsen results of a classifier?
PCA is linear; it hurts when you want to capture nonlinear dependencies.

PCA on images as vectors:

A nonlinear dimensionality reduction (NLDR) algorithm which reduced the images to 2 dimensions, rotation and scale:

More information: http://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction
8,087
What can cause PCA to worsen results of a classifier?
Suppose a simple case with 3 independent variables $x_1,x_2,x_3$ and the output $y$, and suppose now that $x_3=y$, so you should be able to get a zero-error model. Suppose also that in the training set the variation of $y$ is very small, and so is the variation of $x_3$. Now if you run PCA and decide to keep only 2 variables, you will obtain a combination of $x_1$ and $x_2$. So the information in $x_3$, which was the only variable able to explain $y$, is lost.
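A quick sketch of this scenario (illustrative Python/NumPy; the variance values and variable names are assumptions for the demo): the direction PCA discards is precisely the $x_3$ direction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# x1, x2: unit-variance noise; y (and hence x3 = y) varies very little.
x1 = rng.normal(0.0, 1.0, n)
x2 = rng.normal(0.0, 1.0, n)
y = rng.normal(0.0, 0.01, n)    # tiny variation in the target
x3 = y                          # x3 explains y perfectly, but with tiny variance
X = np.column_stack([x1, x2, x3])

# PCA keeps the 2 highest-variance directions ...
evals, evecs = np.linalg.eigh(np.cov(X.T))   # eigenvalues ascending
dropped = evecs[:, 0]                        # smallest-variance direction

# ... and the direction it drops is essentially pure x3, the only
# variable able to explain y.
print(np.round(np.abs(dropped), 2))
```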
8,088
What can cause PCA to worsen results of a classifier?
I see the question already has an accepted answer, but I wanted to share this paper, which talks about using PCA for feature transformation before classification. The take-home message (which is visualised beautifully in @vqv's answer) is:

Principal Component Analysis (PCA) is based on extracting the axes on which data shows the highest variability. Although PCA “spreads out” data in the new basis, and can be of great help in unsupervised learning, there is no guarantee that the new axes are consistent with the discriminatory features in a (supervised) classification problem.

For those interested: if you look at Section 4, Experimental results, they compare the classification accuracies with 1) the original features, 2) PCA-transformed features, and 3) a combination of both, which was something that was new to me.

My conclusion: PCA-based feature transformations allow one to summarize the information from a large number of features in a limited number of components, i.e. linear combinations of the original features. However, the principal components are often difficult to interpret (not intuitive), and, as the empirical results in this paper indicate, they usually do not improve classification performance.

P.S.: I note that one limitation of the paper that should have been listed is that the authors assessed classifier performance using 'accuracy' only, which can be a very biased performance indicator.
8,089
Absence of evidence is not evidence of absence: What does Bayesian probability have to say about it?
There's a difference between not looking and therefore not seeing any X, and looking and not seeing any X. The latter is 'evidence', the former is not.

So the hypothesis under test is "There is a unicorn in that field behind the hill."

Alice stays where she is and doesn't look. If there is a unicorn in the field, Alice sees no unicorns. If there is no unicorn in the field, Alice sees no unicorns. P(sees no unicorn | is unicorn) = P(sees no unicorn | no unicorn) = 1. When the hypothesis makes no difference to the observation, the 'evidence' contributed by the observation to belief in the hypothesis is zero.

Bob climbs to the top of the hill and looks down on the field, and sees no unicorn. If there is a unicorn in the field, Bob would see it. If there is no unicorn in the field, Bob would see no unicorn. P(sees no unicorn | is unicorn) $\neq$ P(sees no unicorn | no unicorn). When the hypothesis being true or false changes the probability of the observation, evidence is contributed. Looking and seeing no unicorns in the field is positive evidence that there are no unicorns in the field.

We can quantify evidence using Bayesian probability. $$P(H_1|O)={P(O|H_1)P(H_1)\over P(O)}$$ $$P(H_2|O)={P(O|H_2)P(H_2)\over P(O)}$$ where $H_1$ is "there is no unicorn in that field", $H_2$ is "there is a unicorn in that field", and $O$ is "I see no unicorn".

Divide one by the other: $${P(H_1|O)\over P(H_2|O)}={P(O|H_1)\over P(O|H_2)}{P(H_1)\over P(H_2)}$$

Take logarithms to make the multiplication additive: $$\mathrm{log}{P(H_1|O)\over P(H_2|O)}=\mathrm{log}{P(O|H_1)\over P(O|H_2)}+\mathrm{log}{P(H_1)\over P(H_2)}$$

We interpret this as saying that the Bayesian belief in favour of $H_1$ over $H_2$ after the observation is equal to the evidence in favour of $H_1$ over $H_2$ arising from the observation plus the Bayesian belief in favour of $H_1$ over $H_2$ before the observation.
The additive evidence arising from the experiment is quantified as: $$\mathrm{log}{P(O|H_1)\over P(O|H_2)}$$

Alice, by not looking, has no evidence: $\mathrm{log}(1/1)=0$. Bob, by looking and not seeing, does: $\mathrm{log}(1/0)=\infty$, meaning absolute certainty. (Of course, if there is a 10% possibility that there is an invisible unicorn in the field, Bob's evidence is $\mathrm{log}(1/0.1)=1$, if we use base-10 logs. This expresses information using a unit called the hartley.)

Rees' dictum is based on people claiming things like that there are no unicorns in the universe, based on having looked at only a tiny portion of it and having seen none. Strictly speaking, there is non-zero evidence arising from this, but it's near zero, being related to the log of the volume of space and time searched divided by the volume of the universe.

Regarding the issue of null-hypothesis experiments, the issue here is that often we are not able to quantify the probability of the observation given an open alternative hypothesis. What is the probability of seeing the reaction if our current understanding is wrong and some unknown physical theory is true? So we set $H_{null}$ to be a null hypothesis we intend to falsify, such that the probability of the observation given the null is very low, say $P(O|H_{null})=0.05$. And we presume the alternative $H_{alt}$ is restricted to unknown theories in which the observation is reasonably probable, $P(O|H_{alt})\approx 1$. The evidence is then $$\mathrm{log}{P(O|H_{alt})\over P(O|H_{null})}=\mathrm{log}{P(O|H_{alt})\over 0.05}=\mathrm{log}(20\times P(O|H_{alt}))\approx \mathrm{log}20$$ It requires some judicious assumptions about the existence of plausible alternatives, but from a Bayesian point of view it doesn't look any different from any other sort of evidence.
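The hartley arithmetic above can be spelled out in a few lines (a hypothetical Python sketch, reusing the probabilities from the text at face value):

```python
from math import log10

def evidence_hartleys(p_obs_given_h1, p_obs_given_h2):
    # Additive evidence for H1 over H2, in hartleys (base-10 logs)
    return log10(p_obs_given_h1 / p_obs_given_h2)

# Alice doesn't look: the observation is certain under both hypotheses.
alice = evidence_hartleys(1.0, 1.0)           # 0.0 -- no evidence

# Bob looks; suppose a present unicorn is invisible 10% of the time.
bob = evidence_hartleys(1.0, 0.1)             # 1.0 hartley for "no unicorn"

# Null-hypothesis setting: P(O|null) = 0.05, P(O|alternative) ~ 1.
null_case = evidence_hartleys(1.0, 0.05)      # log10(20), about 1.3

print(alice, bob, round(null_case, 2))
```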
8,090
Absence of evidence is not evidence of absence: What does Bayesian probability have to say about it?
Bayes says he’s wrong. If observation $O$ would provide support for theory $T$, $$ P(T|O) > P(T) $$ then failure to observe that, which I denote $\bar O$, must disfavor $T$, $$ P(T|\bar O) < P(T) $$ Note that we required no special assumptions. To see this, note that by Bayes’ theorem, the first inequality implies $$ \frac{P(O|T)}{P(O)} > 1 $$ which leads to $$ 1 - P(\bar O|T) > 1 - P(\bar O) $$ and thus $$ \frac{P(\bar O|T)} {P(\bar O)} < 1 $$ By Bayes’ theorem, this leads to the second inequality. In lay terms, it means that if an experiment could find evidence that supports a theory, then performing the experiment but failing to find that evidence must count as evidence against the theory.
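As a sanity check (an illustrative Python sketch, not part of the derivation), one can sweep random joint distributions of $(T, O)$ and confirm the implication numerically:

```python
import random

random.seed(0)

# Random joint distributions over (T, O): whenever observing O would
# support T, i.e. P(T|O) > P(T), failing to observe O must lower it,
# i.e. P(T|not O) < P(T).
for _ in range(10_000):
    w = [random.random() for _ in range(4)]
    s = sum(w)
    # joint probabilities of (T,O), (T,~O), (~T,O), (~T,~O)
    p_to, p_tn, p_fo, p_fn = (x / s for x in w)
    p_t = p_to + p_tn
    p_o = p_to + p_fo
    if p_to / p_o > p_t + 1e-12:            # O supports T ...
        assert p_tn / (1 - p_o) < p_t       # ... so "not O" disfavors T
print("all cases consistent")
```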
8,091
Absence of evidence is not evidence of absence: What does Bayesian probability have to say about it?
There's an important point missing here, but it's not strictly speaking a statistical one. Cosmologists can't run experiments. Absence of evidence in cosmology means there's no evidence available to us here on or near earth, observing the cosmos through instruments. Experimental scientists have a lot more freedom to generate data. We could have an absence of evidence because no one has run the appropriate experiment yet. That isn't evidence of absence. We could also have it because the appropriate experiment was run, which should have produced evidence if the phenomenon in question was real, but it didn't. This is evidence for absence. This is the idea formalised by the more mathematical answers here, in one form or another.
8,092
Absence of evidence is not evidence of absence: What does Bayesian probability have to say about it?
$ {\newcommand\P[1]{\operatorname{P} \left(#1\right) }} {\newcommand\PC[2]{\P{#1 \, \middle| \, #2}}} {\newcommand\A[0]{\text{no evidence}}} {\newcommand\B[0]{\text{absence}}} {\newcommand\PA[0]{\P{\A}}} {\newcommand\PB[0]{\P{\B}}} {\newcommand\PAB[0]{\PC{\A}{\B}}} {\newcommand\PBA[0]{\PC{\B}{\A}}} $tl;dr– Absence-of-evidence isn't (significant) evidence-of-absence when the odds of not finding evidence aren't (significantly) affected by absence. Following the derivation in @fblundun's answer, this claim works when $$ \PA ~=~ \PAB \,. $$ Three notable cases: When this equality holds perfectly and unconditionally,$$ \PA ~=~ \PAB \,, $$the claim is unfalsifiable. For example, consider a magical pink unicorn that watches us from outside of reality. But, under no circumstance does it physically interact with us in any way: it exerts no physical effects, including gravitational effects, whatsoever. Since we can never test for the existence of this magical pink unicorn, it's unfalsifiable. It doesn't really exist nor not-exist; neither claim is meaningful. When this equality is approximately true,$$ \PA ~\approx~ \PAB \,, $$then it's the sort of scenario where people tend to say The absence of evidence isn't the evidence of absence. For example, say a historian is trying to find evidence that prehistoric people saw a particular star in the night sky. They may fail to find, say, a cave-drawing with that exact star somehow depicted. But since it seems pretty unlikely for there to have been a cave-drawing of that exact star anyway, that absence-of-evidence (a lack of cave-drawings depicting the star) isn't much evidence-of-absence (evidence that prehistoric people didn't see that star). When this equality isn't approximately true,$$ \require{cancel} \PA ~\cancel{\approx}~ \PAB \,, $$then the claim The absence of evidence isn't the evidence of absence. doesn't really work. 
Basically, if you wouldn't expect to find evidence of something even if it's true, then there's no sense in saying that it's false when you don't find evidence. Discussion: How Bayesian reasoning works. Naively, Bayesian-logic requires us to recognize that we have some pre-existing beliefs about how likely things are. Then, as we continue to make observations, we can find evidence for/against that adjust the likelihoods up/down. Strictly speaking, anything that makes something less probable is evidence-of-absence. For example, did someone get a perfect score on their elementary-school-spelling-test last week in your hometown? You could check your local news, and if you don't see a story reporting on it, then that's technically evidence against it (since there was some possibility of it being reported, and it not being reported excludes that line of possibilities). But since it was a really small branch of probability-space (a Bayesian tree) that got excluded, this absence-of-evidence isn't (significant) evidence-of-absence. However, absence-of-evidence can be evidence-of-absence. For example, if you have a kid in your household who brags a lot whenever they get a perfect score on a test, but you didn't hear any bragging last week, then that would be relatively significant evidence that they didn't get a perfect score last week. Then there's the odd-ball case: unfalsifiable scenarios. Whenever a proposition being true/false would literally have no effect on reality whatsoever, then it's unfalsifiable. Unfalsifiable claims can never be evidenced, neither for nor against, so the claim that absence-of-evidence isn't evidence-of-absence holds perfectly in those scenarios. This point may come up in discussions about religion.
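The two everyday examples above can be made quantitative with a small Bayes-rule calculation; the priors and likelihoods below are invented purely for illustration:

```python
# Bayes' rule: P(absence | no evidence)
#   = P(no evidence | absence) * P(absence) / P(no evidence)
def posterior_absence(prior_absence, p_noev_if_absent, p_noev_if_present):
    p_noev = (p_noev_if_absent * prior_absence
              + p_noev_if_present * (1 - prior_absence))
    return p_noev_if_absent * prior_absence / p_noev

# Cave-drawing case: "no evidence" is almost certain either way,
# so finding no drawing barely moves the prior.
print(posterior_absence(0.5, 0.99, 0.98))  # ~0.503

# Bragging-kid case: silence is far likelier if there was no perfect
# score, so no bragging is substantial evidence of absence.
print(posterior_absence(0.5, 0.99, 0.10))  # ~0.908
```

In the first case the posterior barely differs from the prior; in the second it jumps from 0.5 to about 0.91, exactly the distinction the three cases above draw.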
8,093
Absence of evidence is not evidence of absence: What does Bayesian probability have to say about it?
I think that @Nat's answer is good, but understates the importance of the case where $P(\mbox{absence}) \approx P(\mbox{absence} | \mbox{no evidence})$ (or not-quite-but-almost-equivalently $P(\text{no evidence}|\text{absence}) \approx P(\text{no evidence})$). The big problem here is that, as a general rule, it is not known to the experimenter a priori if the above equality holds, and it is therefore not safe in the general case to assume that absence of evidence is evidence of absence. Taken in context, it may be possible to interpret absence of evidence as evidence of absence, but one does not necessarily follow from the other, and one should be very very careful when attempting to interpret absence of evidence as evidence of absence. One scenario where this happens is when evidence is hard to gather for some reason or another. For example, suppose that we would like to study the difference in educational outcomes between textbook A and textbook B. We give textbook A to class A and textbook B to class B, and compare their grades afterward. Well, unfortunately, class A and class B are potentially taught by the same instructor, who has opinions on the textbooks which ultimately affect the outcome of the experiment, xor class A and B are taught by different instructors which affects the outcome of the experiment. Also, two dozen other confounding factors with varying degrees of impact on the outcome of the experiment. Ultimately, the data show nothing because of these things, and there is an absence of evidence differentiating the outcomes from textbook A and B. It's not that we didn't look, and although maybe the data were easy to gather, any potential evidence was difficult to gather. I.e. $P(\text{no evidence}) \approx P(\text{no evidence} | \text{absence}) \approx 1$. In the Bayesian context, if evidence is impossible, then one cannot update one's prior. 
In the real world, if you're studying gravity in a vacuum, then you maybe don't have to worry about this, but if you're studying just about anything with human subjects, then it is sometimes difficult to know ahead of time whether your experiment even can yield good evidence. In any case, if researchers do an experiment and fail to find an effect, you should seriously contemplate $P(\text{no evidence})$ in the context of the experiment before you decide to interpret an absence of evidence as evidence of absence. As another analogy, you might consider a criminal who covers his tracks well: the detective fails to find evidence not because she didn't look for it, but because the evidence was hard to find. Yet another scenario where this happens very frequently (and somewhat surprisingly) is when $P(\text{absence}) \approx 0$. Using the same example above, consider testing the hypothesis that $H_1 :=$ textbook A gives different educational outcomes by some chosen metric than textbook B (which is an incredibly common sort of hypothesis to test). The opposite (absence) is that textbook A and textbook B have identical outcomes, which is, for most priors, impossible (it's a single point!). Since $P(\text{absence}) \approx P(\text{absence} | \text{no evidence}) \approx 0$, then when our study fails to find an effect, we should be very careful in deciding that this presents evidence of a lack of an effect. Of course, if you're using Bayesian techniques to study the problem, then you won't "fail to find an effect", but in a post hoc Bayesian analysis of the experiment that was run (since many scientists don't do the Bayesian thing), we can't interpret absence of evidence as evidence of absence. So, given an arbitrary study that fails to find an effect, you should not interpret this as strong evidence for a lack of an effect. Without careful consideration of the context of the experiment, you shouldn't even consider it weak evidence for a lack of an effect. 
I think that this is the essence of the adage: "absence of evidence is not [necessarily] evidence of absence".
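Both failure modes discussed above show up in a single hypothetical Bayes-rule computation (all numbers are invented for illustration):

```python
def posterior_absence(prior_absence, p_noev_if_absent, p_noev_if_present):
    p_noev = (p_noev_if_absent * prior_absence
              + p_noev_if_present * (1 - prior_absence))
    return p_noev_if_absent * prior_absence / p_noev

# Confounded study: P(no evidence) ~ 1 whether or not an effect exists,
# so a null result leaves the prior essentially untouched.
print(posterior_absence(0.3, 0.999, 0.995))  # ~0.301, vs prior 0.3

# Point null: P(absence) ~ 0 a priori, so the posterior stays ~ 0
# no matter what the study finds.
print(posterior_absence(1e-6, 0.9, 0.5))  # ~1.8e-06
```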
8,094
Absence of evidence is not evidence of absence: What does Bayesian probability have to say about it?
Suppose absence of evidence were not evidence of absence, i.e.
$$P(\text{absence} \mid \text{no evidence}) = P(\text{absence}).$$
Then by Bayes' theorem we would have
$$P(\text{absence}) = \frac{P(\text{no evidence} \mid \text{absence})\,P(\text{absence})}{P(\text{no evidence})}.$$
Multiply both sides by $P(\text{no evidence})/P(\text{absence})$ to get
$$P(\text{no evidence}) = P(\text{no evidence} \mid \text{absence}),$$
which means absence doesn't decrease the probability of evidence, which is absurd.
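The argument can be sanity-checked numerically: pick any distribution in which absence makes evidence less likely, and the posterior probability of absence necessarily rises after observing no evidence (the numbers here are arbitrary):

```python
p_absence = 0.4
p_noev_if_absent = 0.9    # absence makes "no evidence" more likely...
p_noev_if_present = 0.6   # ...than presence does

# Total probability, then Bayes' theorem as in the derivation above.
p_noev = p_noev_if_absent * p_absence + p_noev_if_present * (1 - p_absence)
p_absence_given_noev = p_noev_if_absent * p_absence / p_noev

print(p_absence_given_noev)  # 0.5, above the prior of 0.4
```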
8,095
Absence of evidence is not evidence of absence: What does Bayesian probability have to say about it?
I believe this is a very philosophical question and I doubt Bayesian theory is applicable to it. What do we mean by a "probability" of a dragon in the garage, a teapot between Earth and Mars, or extraterrestrial life? They either exist or not, and it is not a realisation of a random variable. To drive the idea to the extreme, let's take an example from mathematics. Let's define: \begin{align} F &\equiv \text{"Goldbach's conjecture is false", and} \\ n &\equiv \text{"an even number that cannot be partitioned as two primes"} \end{align} As soon as we find a single $n$ satisfying the above definition, Goldbach falls. End of discussion. Or, in "probability" terms: $$ P(F | n) = 1 $$ It is straightforward to show that the above implies: \begin{align} P(F) &= P(n) + P(F|\bar n) P(\bar n), ~ ~ \text{or} \\ P(F|\bar n) &= \frac {P(F) - P(n)} {1 - P(n)} \end{align} Now, if the evidence is hard to find, the conditional and unconditional "probability" of Goldbach being wrong become identical: $$ P(n) \rightarrow 0 \Leftrightarrow P(F|\bar n) \rightarrow P(F) $$ or, in other words, the absence of evidence doesn't affect the probability of whatever we try to prove. This is, however, conditioned on the rarity of evidence. Since we have examined the numbers up to $10^{18}$ and haven't found an $n$ to disprove Goldbach, such numbers, if they exist, are likely rare. Therefore, we can keep believing Goldbach. However, as I noted above, Goldbach's conjecture is not a random variable. It is either true or not, so talking probabilities makes no sense here. We can, at best, talk about our subjective belief in its correctness. In general, it is hard to define when the evidence is rare. The only criterion I can imagine is to make an honest and serious effort to find it, and fail. E.g. make many precise experiments which still lead to no proof. 
But, as a counterexample, take the Michelson-Morley experiment: Here, we (actually the physicists) took the absence of evidence as a serious evidence of absence of the aether, eventually leading to special relativity. So, in conclusion, I'm skeptical regarding the practical relevance of your quote. And, btw, according to Quote Investigator, it is much older than Martin Rees.
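The limit $P(n) \rightarrow 0 \Rightarrow P(F|\bar n) \rightarrow P(F)$ is easy to see numerically (the prior $P(F)$ below is arbitrary):

```python
def p_false_given_no_counterexample(p_F, p_n):
    # P(F | not n) = (P(F) - P(n)) / (1 - P(n)); requires p_n <= p_F
    return (p_F - p_n) / (1 - p_n)

p_F = 0.01  # arbitrary prior belief that Goldbach is false
for p_n in [0.009, 1e-4, 1e-8]:
    print(p_false_given_no_counterexample(p_F, p_n))
# As p_n shrinks, the posterior climbs back toward the prior 0.01:
# finding no counterexample below 1e18 tells us little once
# counterexamples are believed to be rare anyway.
```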
8,096
Shouldn't the joint probability of 2 independent events be equal to zero?
There is a difference between independent events: $\mathbb P(A \cap B) =\mathbb P(A)\,\mathbb P(B)$, i.e. $\mathbb P(A \mid B)= \mathbb P(A)$ so knowing one happened gives no information about whether the other happened mutually disjoint events: $\mathbb P(A \cap B) = 0$, i.e. $\mathbb P(A \mid B)= 0$ so knowing one happened means the other did not happen You asked for a picture. This might help:
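A quick enumeration of the two-dice sample space makes the distinction concrete (the specific events chosen here are just illustrative):

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # 36 equally likely pairs
p = lambda event: len(event) / len(outcomes)

A = {o for o in outcomes if o[0] == 1}  # first die shows 1
B = {o for o in outcomes if o[1] == 1}  # second die shows 1
C = {o for o in outcomes if o[0] == 2}  # first die shows 2

# Independent, not disjoint: P(A and B) = P(A) P(B) = 1/36 != 0
print(p(A & B), p(A) * p(B))

# Disjoint, not independent: P(A and C) = 0 but P(A) P(C) = 1/36
print(p(A & C), p(A) * p(C))
```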
8,097
Shouldn't the joint probability of 2 independent events be equal to zero?
What I understood from your question is that you might have confused independent events with disjoint events. Disjoint events: Two events are called disjoint or mutually exclusive if they cannot both happen. For instance, if we roll a die, the outcomes 1 and 2 are disjoint since they cannot both occur. On the other hand, the outcome 1 and "rolling an odd number" are not disjoint, since both occur if the outcome of the roll is a 1. The probability of the intersection of disjoint events is always 0. Independent events: Two events are independent if knowing the outcome of one provides no useful information about the outcome of the other. For instance, when we roll two dice, the outcome of each is an independent event – knowing the outcome of one roll does not help in determining the outcome of the other. Let's build on that example: we roll two dice, a red and a white. The probability of getting a 1 on the red is P(red = 1) = 1/6, and the probability of getting a 1 on the white is P(white = 1) = 1/6. We can get the probability of their intersection (i.e. both showing 1) simply by multiplying them, since they are independent: P(red = 1) x P(white = 1) = 1/6 x 1/6 = 1/36 != 0. In simple words, 1/6 of the time the red die shows a 1, and 1/6 of those times the white die shows a 1. To illustrate:
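The 1/36 figure is easy to confirm by simulation (a rough sketch; the exact frequency varies from run to run):

```python
import random

random.seed(0)
trials = 100_000
both_ones = 0
for _ in range(trials):
    red = random.randint(1, 6)
    white = random.randint(1, 6)
    if red == 1 and white == 1:
        both_ones += 1

print(both_ones / trials)  # close to 1/36 ~ 0.0278, clearly nonzero
```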
8,098
Shouldn't the joint probability of 2 independent events be equal to zero?
The confusion of the OP lies in the notions of disjoint events and independent events. One simple and intuitive description of independence is: A and B are independent if knowing that A happened gives you no information about whether or not B happened. Or in other words, A and B are independent if knowing that A happened does not change the probability that B happened. If A and B are disjoint, then knowing that A happened is a game changer! Now you would be certain that B did not happen! And so they are not independent. The only way independence and "disjointedness" coincide in this example is when B is the empty set (which has probability 0); in that case A happening tells us nothing about B. No pictures, but at least some intuition.
8,099
In Regression Analysis, why do we call independent variables "independent"?
If we pull back from today's emphasis on machine learning and recall how much of statistical analysis was developed for controlled experimental studies, the phrase "independent variables" makes a good deal of sense. In controlled experimental studies, the choices of a drug and its concentrations, or the choices of a fertilizer and its amounts per acre, are made independently by the investigator. The interest is in how a response variable of interest (e.g., blood pressure, crop yield) depends on these experimental manipulations. Ideally, the characteristics of the independent variables are tightly specified, with essentially no errors in knowing their values. Then standard linear regression, for example, models the differences among values of dependent variables in terms of the values of the independent variables plus residual errors. The same mathematical formalism used for regression in the context of controlled experimental studies also can be applied to analysis of observed data sets with little to no experimental manipulation, so it's perhaps not surprising that the phrase "independent variables" has carried over to such types of studies. But, as others on this page note, that's probably an unfortunate choice, with "predictors" or "features" more appropriate in such contexts.
8,100
In Regression Analysis, why do we call independent variables "independent"?
In many ways, "independent variable" is an unfortunate choice. The variables need not be independent of each other, and of course need not be independent of the dependent variable $Y$. In teaching and in my book Regression Modeling Strategies I use the word predictor. In some situations that word is not strong enough, but it works well on the average. A full description of the role of the $X$ (right hand side) variables in a statistical model might be too long to use each time: the set of variables or measurements upon which the distribution of $Y$ is conditioned. This is another way of saying the set of variables whose distributions we are currently not interested in, but whose values we treat as constants.