7,201
Why is using squared error the standard when absolute error is more relevant to most problems? [duplicate]
Suppose one rolls one die (numbered 1-6), and wants to compute its average deviation from the average value of 3.5. Two of the six outcomes differ from it by 0.5, two by 1.5, and two by 2.5, for an average deviation of 1.5. If one instead takes the average of the squares of the deviations, one would have two of 0.25, two of 2.25, and two of 6.25, for an average of 2.916 (35/12). Now suppose one rolls two dice instead of one. The average deviation (of the sum from 7) would be 1.94 (35/18), and the average squared deviation would be 5.833 (70/12). If one wanted to estimate the two-dice figures from the single-die ones, doubling the single-die average deviation (i.e. 1.5) would yield 3, which is much larger than the actual average deviation of 1.94. On the other hand, doubling the single-die average squared deviation (2.916) yields precisely the average squared deviation for two dice. In general, the square root of the average of the squares is a more useful number than the average of the squares itself, but if one wants to compute the square root of the average of a bunch of squares, it's easier to keep the values as squares while adding them than to take square roots whenever reporting them and then have to square them again before they can be added or averaged.
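The dice arithmetic above can be checked by brute-force enumeration in R (my own check, not part of the original answer):

```r
# Average absolute deviation and average squared deviation for one die.
one_die <- 1:6
mad_one <- mean(abs(one_die - 3.5))    # average absolute deviation
msd_one <- mean((one_die - 3.5)^2)     # average squared deviation

# All 36 equally likely sums of two dice, which have mean 7.
two_dice <- outer(one_die, one_die, "+")
mad_two  <- mean(abs(two_dice - 7))
msd_two  <- mean((two_dice - 7)^2)

mad_one   # 1.5
msd_one   # 2.9166... = 35/12
mad_two   # 1.9444... = 35/18, not 2 * 1.5 = 3
msd_two   # 5.8333... = 70/12, exactly 2 * msd_one
```

The squared deviations add exactly when combining independent dice; the absolute deviations do not.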
7,202
Why is using squared error the standard when absolute error is more relevant to most problems? [duplicate]
In my opinion, it boils down to the fact that the squared error guarantees a unique solution and is easier to work with, and hence more intuitive. Under only two main assumptions (plus linearity of the error term), a quadratic loss function guarantees that the estimated coefficient is the unique minimizer. Least absolute deviations does not have this property: there is always the potential for an infinite number of solutions. Assuming that $\exists\theta_o\in\Theta$ such that $E(y|x)=m(x,\theta_o)$ and $E((m(x,\theta)-m(x,\theta_o))^2)>0$ for all $\theta\neq\theta_o$, then $\theta_o$ is the unique minimizer for non-linear least squares. Proof: Let $y=m(x,\theta_o)+u$ with $E(u|x)=0$. Then $$E_{\theta_o}((y-m(x,\theta))^2)=E_{\theta_o}((y-m(x,\theta_o)+m(x,\theta_o)-m(x,\theta))^2)$$ $$=E_{\theta_o}(u^2)+E_{\theta_o}((m(x,\theta_o)-m(x,\theta))^2)+2E_{\theta_o}(u(m(x,\theta_o)-m(x,\theta))).$$ By the law of iterated expectations, the third term is zero. Therefore $$E_{\theta_o}((y-m(x,\theta))^2)=E_{\theta_o}(u^2)+E_{\theta_o}((m(x,\theta_o)-m(x,\theta))^2),$$ which is uniquely minimized at $\theta_o$. Another nice property is the law of total variance, $$Var(Y)=Var_X(E_Y(Y|X))+E_X(Var_Y(Y|X)),$$ which can be read as: the variance of the dependent variable is the variance of the fitted value plus the variance of the residual. On a more technical note, the asymptotic formulas are much easier for a quadratic loss function; importantly, they do not depend on the probability density of the error term. Unfortunately, that is not true for least absolute deviations: its asymptotic formula involves the conditional density of the error term at zero given $x$, $f_{u|x}(0)$, which is essentially impossible to estimate, so most practitioners end up assuming the error term is independent of $x$ in order to only have to estimate $f_u(0)$. Finally, and least rigorously, people have an easy time understanding what a mean or expected value is, and the quadratic loss solves for the conditional expectation. 
Least absolute deviations solves for the conditional median, which is just harder to interpret; that is also one reason quantile regressions aren't very popular.
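A quick way to see the uniqueness point: for a sample, the squared loss has a single minimizer (the mean), while the absolute loss can be flat over a whole interval. A small R illustration of my own, with made-up numbers:

```r
# A sample with an even number of points, so the median is not unique.
x <- c(1, 2, 7, 10)

sq_loss  <- function(m) sum((x - m)^2)   # squared-error criterion
abs_loss <- function(m) sum(abs(x - m))  # absolute-error criterion

# The squared loss is minimized at exactly one point: the sample mean.
grid <- seq(0, 11, by = 0.01)
grid[which.min(sapply(grid, sq_loss))]   # 5
mean(x)                                  # 5

# The absolute loss is flat between the two middle observations (2 and 7):
# every point in [2, 7] attains the same minimal loss, so the
# least-absolute-deviations solution is an entire interval.
abs_loss(2); abs_loss(4.5); abs_loss(7)  # all equal 14
```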
7,203
Independent variable = Random variable?
There are two common formulations of linear regression. To focus on the concepts, I will abstract them somewhat. The mathematical description is a little more involved than the English description, so let's begin with the latter: Linear regression is a model in which a response $Y$ is assumed to be random with a distribution determined by regressors $X$ via a linear map $\beta(X)$ and, possibly, by other parameters $\theta$. In most cases the set of possible distributions is a location family with parameters $\alpha$ and $\theta$, and $\beta(X)$ gives the parameter $\alpha$. The archetypical example is ordinary regression, in which the set of distributions is the Normal family $\mathcal{N}(\mu, \sigma)$ and $\mu=\beta(X)$ is a linear function of the regressors. Because I have not yet described this mathematically, it's still an open question what kinds of mathematical objects $X$, $Y$, $\beta$, and $\theta$ refer to--and I believe that is the main issue in this thread. Although one can make various (equivalent) choices, most will be equivalent to, or special cases of, the following description. Fixed regressors. The regressors are represented as real vectors $X\in\mathbb{R}^p$. The response is a random variable $Y:\Omega\to\mathbb{R}$ (where $\Omega$ is endowed with a sigma field and probability). The model is a function $f:\mathbb{R}\times\Theta\to M^d$ (or, if you like, a set of functions $\mathbb{R}\to M^d$ parameterized by $\Theta$). $M^d$ is a finite-dimensional topological (usually second-differentiable) submanifold (or submanifold-with-boundary) of dimension $d$ of the space of probability distributions. $f$ is usually taken to be continuous (or sufficiently differentiable). $\Theta\subset\mathbb{R}^{d-1}$ are the "nuisance parameters." It is supposed that the distribution of $Y$ is $f(\beta(X), \theta)$ for some unknown dual vector $\beta\in\mathbb{R}^{p*}$ (the "regression coefficients") and unknown $\theta\in\Theta$. 
We may write this $$Y \sim f(\beta(X), \theta).$$ Random regressors. The regressors and response are a $p+1$ dimensional vector-valued random variable $Z = (X,Y): \Omega^\prime \to \mathbb{R}^p \times \mathbb{R}$. The model $f$ is the same kind of object as before, but now it gives the conditional probability $$ Y|X \sim f(\beta(X), \theta).$$ The mathematical description is useless without some prescription telling how it is intended to be applied to data. In the fixed-regressor case we conceive of $X$ as being specified by the experimenter. Thus it might help to view $\Omega$ as a product $\mathbb{R}^p\times \Omega^\prime$ endowed with a product sigma algebra. The experimenter determines $X$ and nature determines (some unknown, abstract) $\omega\in\Omega^\prime$. In the random-regressor case, nature determines $\omega\in\Omega^\prime$, the $X$-component of the random variable $\pi_X(Z(\omega))$ determines $X$ (which is "observed"), and we now have an ordered pair $(X(\omega), \omega) \in \Omega$ exactly as in the fixed-regressor case. The archetypical example of multiple linear regression (which I will express using standard notation for the objects rather than this more general one) is that $$f(\beta(x), \sigma)=\mathcal{N}(\beta(x), \sigma)$$ for some constant $\sigma \in \Theta = \mathbb{R}^{+}$. As $x$ varies throughout $\mathbb{R}^p$, its image differentiably traces out a one-dimensional subset--a curve--in the two-dimensional manifold of Normal distributions. When--in any fashion whatsoever--$\beta$ is estimated as $\hat\beta$ and $\sigma$ as $\hat\sigma$, the value of $\hat\beta(x)$ is the predicted value of $Y$ associated with $x$--whether $x$ is controlled by the experimenter (case 1) or is only observed (case 2). 
If we either set a value (case 1) or observe a realization (case 2) $x$ of $X$, then the response $Y$ associated with that $X$ is a random variable whose distribution is $\mathcal{N}(\beta(x), \sigma)$, which is unknown but estimated to be $\mathcal{N}(\hat\beta(x), \hat\sigma)$.
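To make the fixed-regressor case concrete, here is a minimal R simulation of the archetypal model $Y \sim \mathcal{N}(\beta(x), \sigma)$; the design points, coefficients, and $\sigma$ are my own, chosen purely for illustration:

```r
set.seed(1)
x     <- c(1, 2, 3, 4, 5)   # set by the "experimenter", not random
beta0 <- 2                  # true intercept (illustrative)
beta1 <- 0.5                # true slope (illustrative)
sigma <- 0.3

# Nature determines omega: each Y is a draw from N(beta(x), sigma).
y <- beta0 + beta1 * x + rnorm(length(x), 0, sigma)

fit <- lm(y ~ x)            # estimate beta-hat and (implicitly) sigma-hat
coef(fit)                   # close to the true (2, 0.5)

# The predicted value at a new x is the estimated mean of N(beta-hat(x), sigma-hat).
predict(fit, data.frame(x = 2.5))
```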
7,204
Independent variable = Random variable?
First of all, @whuber gave an excellent answer. I'll give it a different take, maybe simpler in some sense, also with a reference to a text. MOTIVATION $X$ can be random or fixed in the regression formulation. This depends on your problem. For so-called observational studies it has to be random, and for experiments it usually is fixed. Example one. I'm studying the impact of exposure to electron radiation on the hardness of a metal part. So, I take a few samples of the metal part and expose them to varying levels of radiation. My exposure level is $X$, and it's fixed, because I set it to the levels that I chose. I fully control the conditions of the experiment, or at least try to. I can do the same with other parameters, such as temperature and humidity. Example two. You're studying the impact of the economy on the frequency of fraud in credit card applications. So, you regress the fraud event counts on GDP. You do not control GDP; you can't set it to a desired level. Moreover, you probably want to look at multivariate regressions, so you have other variables such as unemployment, and now you have a combination of values in $X$, which you observe but do not control. In this case $X$ is random. Example three. You are studying the efficacy of a new pesticide in the field, i.e. not under lab conditions but on an actual experimental farm. In this case you can control some things, e.g. the amount of pesticide to apply. However, you do not control everything, e.g. weather or soil conditions. Ok, you can control the soil to some extent, but not completely. This is an in-between case, where some conditions are observed and some conditions are controlled. There's an entire field of study called experimental design that focuses on this third case, and agricultural research is one of its biggest applications. MATH Here goes the mathematical part of the answer. 
There's a set of assumptions that is usually presented when studying linear regression, called the Gauss-Markov conditions. They are very theoretical, and nobody bothers to prove that they hold in any practical setup. However, they are very useful in understanding the limitations of the ordinary least squares (OLS) method. The set of assumptions is different for random and fixed $X$, which roughly correspond to observational vs. experimental studies. Roughly, because, as I showed in the third example, sometimes we're really in between the extremes. I find the "Gauss-Markov theorem" section in the Encyclopedia of Research Design by Salkind a good place to start; it's available in Google Books. For the usual regression model $Y=X\beta+\varepsilon$, the assumptions of the fixed design are: zero mean, $E[\varepsilon]=0$; homoscedasticity, $E[\varepsilon^2]=\sigma^2$; no serial correlation, $E[\varepsilon_i\varepsilon_j]=0$. The same assumptions in the random design are: $E[\varepsilon|X]=0$; homoscedasticity, $E[\varepsilon^2|X]=\sigma^2$; no serial correlation, $E[\varepsilon_i\varepsilon_j|X]=0$. As you can see, the difference is in conditioning the assumptions on the design matrix for the random design. Conditioning makes these assumptions stronger. For instance, we are not just saying, as in the fixed design, that the errors have zero mean; in the random design we also say that their mean does not depend on $X$, the covariates.
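To see why conditioning genuinely strengthens the assumption, here is a small R counterexample of my own (not from the text above): an error term whose unconditional mean is zero, so the fixed-design condition $E[\varepsilon]=0$ holds, while $E[\varepsilon|X]\neq 0$ almost everywhere.

```r
set.seed(42)
X   <- rnorm(1e5)     # a random regressor
eps <- X^2 - 1        # E[eps] = E[X^2] - 1 = 0, but E[eps | X] = X^2 - 1

# Unconditionally, the error has (approximately) zero mean:
mean(eps)                    # near 0

# Conditionally on X, it clearly does not:
mean(eps[abs(X) > 1.5])      # positive: for extreme X, errors run high
mean(eps[abs(X) < 0.5])      # negative: for central X, errors run low
```

So $E[\varepsilon]=0$ can hold while $E[\varepsilon|X]=0$ fails, which is exactly the gap between the two sets of conditions.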
7,205
Independent variable = Random variable?
In statistics a random variable is a quantity that varies randomly in some way. You can find a good discussion in this excellent CV thread: What is meant by a “random variable”? In a regression model, the predictor variables (X-variables, explanatory variables, covariates, etc.) are assumed to be fixed and known. They are not assumed to be random. All of the randomness in the model is assumed to be in the error term. Consider a simple linear regression model as standardly formulated: $$ Y = \beta_0 + \beta_1 X + \varepsilon \\ \text{where } \varepsilon\sim\mathcal N(0, \sigma^2) $$ The error term, $\varepsilon$, is a random variable and is the source of the randomness in the model. As a result of the error term, $Y$ is a random variable as well. But $X$ is not assumed to be a random variable. (Of course, it might be a random variable in reality, but that is not assumed or reflected in the model.)
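A tiny R sketch of this point (all values illustrative): with $X$ fixed, rerunning the "experiment" redraws only $\varepsilon$, so $Y$ changes from replicate to replicate while $X$ does not.

```r
set.seed(7)
X     <- c(0, 1, 2, 3)   # fixed and known; identical in every replicate
beta0 <- 1; beta1 <- 2   # illustrative coefficients
sigma <- 0.5

# Two independent replications of the same model: same X, new error draws.
y1 <- beta0 + beta1 * X + rnorm(length(X), 0, sigma)
y2 <- beta0 + beta1 * X + rnorm(length(X), 0, sigma)

y1; y2   # two different random responses observed at the same fixed X
```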
7,206
Independent variable = Random variable?
Not sure if I understand the question, but if you're just asking, "must an independent variable always be a random variable", then the answer is no. An independent variable is a variable which is hypothesised to be correlated with the dependent variable. You then test whether this is the case through modelling (presumably regression analysis). There are a lot of complications and "ifs, buts and maybes" here, so I would suggest getting a copy of a basic econometrics or statistics book covering regression analysis and reading it thoroughly, or else getting the class notes from a basic statistics/econometrics course online if possible.
7,207
Will the mean of a set of means always be the same as the mean obtained from the entire set of raw data?
No, the average of the averages of subsets is not, in general, the same as the average of the whole set. It will be the same only if the subsets all have the same sample size. If you want the average of the population, multiply each average by the size of the sample it came from to get each sample's total, sum those totals, then divide by the total number of data points (the population size). See the batting averages example under Simpson’s paradox for a good illustration of why averaging averages does not usually work.
7,208
Will the mean of a set of means always be the same as the mean obtained from the entire set of raw data?
Let's try it and see if we can figure it out. The following example is coded in R, which is free and will let you reproduce the example, but hopefully the code is self-explanatory: 
group1 = c(1,2,3)
group2 = c(4,5,6,7,8,9)
mean(group1)                          # 2
mean(group2)                          # 6.5
mean(c(group1, group2))               # 5
mean(c(mean(group1), mean(group2)))   # 4.25
So what we see is that you certainly can calculate the mean of the means, but the mean of the means and the mean of all the raw data don't match. We can also try a weighted average, using @BilltheLizard's suggestion to use each group's sample size as a weight (the weights are indicated with the w argument): 
weighted.mean(c(mean(group1), mean(group2)), w=c(3,6))   # 5
This now gives us the same answer.
7,209
Will the mean of a set of means always be the same as the mean obtained from the entire set of raw data?
In general, if you have a set of $m$ groups with respective sizes $n_1,...,n_m$ and means $\bar{x}_1,...,\bar{x}_m$ then the overall sample mean of all the data is: $$\bar{x} = \sum_{k=1}^m \frac{n_k}{n} \cdot \bar{x}_k \quad \quad \quad \quad \quad n = \sum_{k=1}^m n_k.$$ Thus, the overall mean is always a weighted average of the sample means of the groups. In the special case where all the groups are the same size ($n_1 = \cdots = n_m$), all the weights will be the same and so the overall sample mean will be the mean of the group sample means.
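The weighted-average formula above is easy to sanity-check numerically; here is a minimal Python sketch (the group values are arbitrary illustrations, not data from the question):

```python
# Check that the overall mean equals the size-weighted average of group means.
def mean(xs):
    return sum(xs) / len(xs)

group1 = [1, 2, 3]           # n1 = 3, mean = 2
group2 = [4, 5, 6, 7, 8, 9]  # n2 = 6, mean = 6.5

overall = mean(group1 + group2)
n = len(group1) + len(group2)
weighted = (len(group1) / n) * mean(group1) + (len(group2) / n) * mean(group2)

print(overall, weighted)  # both equal 5 (up to floating point)
```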
7,210
Will the mean of a set of means always be the same as the mean obtained from the entire set of raw data?
Just want to give an (extreme) example: if we have a hit rate of (1/10000) in one sample, and a hit rate of (1/2) in another sample, then $\frac{1}{n}\sum_i \frac{hit_i}{total_i} \neq \frac{\sum_i hit_i}{\sum_i total_i}$. In the first case (mean of means), we have an "average" hit rate of 0.5001/2, while in the second case (pooling all the data) we have 2/10002, and these two numbers are not the same. Whether one is more appropriate or correct depends on your use case.
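The arithmetic can be checked directly; a quick Python sketch using the same two samples:

```python
# Two samples: 1 hit out of 10000, and 1 hit out of 2.
hits   = [1, 1]
totals = [10000, 2]

rates = [h / t for h, t in zip(hits, totals)]
mean_of_means = sum(rates) / len(rates)   # (1/10000 + 1/2) / 2 = 0.5001 / 2
pooled        = sum(hits) / sum(totals)   # 2 / 10002

print(mean_of_means, pooled)  # roughly 0.25 vs roughly 0.0002
```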
7,211
Will the mean of a set of means always be the same as the mean obtained from the entire set of raw data?
Here's a simple counter-example that shows that the relation in the posted question cannot be true in general. Let us begin by defining the function mean that simply takes a set of outcomes and outputs the mean of those outcomes: mean({x_1, ..., x_n}) := (x_1 + ... + x_n)/n, where n = #{x_1, ..., x_n} (the number of elements in the set). Assume that your set is the outcomes {1,2,3}. Then, mean({mean({1}), mean({2,3})}) = 1.75, mean({1,2,3}) = 2, mean({mean({1,2}), mean({3})}) = 2.25; that is, depending on how you partition the set into subsets, the mean of the means (of the subsets) can be smaller or larger than the overall mean. The above calculations also demonstrate that there is no general order between the mean of the means and the overall mean. In other words, the hypotheses "mean of means is always greater/lesser than or equal to overall mean" are also invalid.
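The counter-example is easy to verify in code; a minimal Python check of the three values:

```python
def mean(xs):
    return sum(xs) / len(xs)

split_a = mean([mean([1]), mean([2, 3])])  # partition {1} | {2,3}
overall = mean([1, 2, 3])
split_b = mean([mean([1, 2]), mean([3])])  # partition {1,2} | {3}

print(split_a, overall, split_b)  # 1.75 2.0 2.25
```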
7,212
The moose must flow, but how?
As you have already pointed out, the question is whether you are dealing with a vector field $v$ from your polygon $P$ to $\mathbb{R}^2$, $v:P \to \mathbb R^2$, and since it is supposed to be normalized, your field maps to the unit circle $\mathbf{S}^1$, i.e. $v:P\to\mathbf{S}^1$. First, let's consider the idea of a gradient field: note that every vector field that is a gradient field must have zero curl, $\operatorname{curl}(v) = 0$. And since moose are probably expected to, at least sometimes, walk loops, a gradient field might not be an appropriate model. Also, moose tracks will probably cross, which means that you don't even have a proper map from your polygon $P$ to $\mathbf{S}^1$, so you don't even have a proper vector field. So then: what could be a proper model? The first step would be to answer the question of what you actually want to achieve, what is your ultimate goal? Do you want to predict where a moose will be in the future? Do you want to have a (time-dependent) "moose density function"? Do you want to classify moose tracks?
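To spell out the curl argument: if $v$ were a gradient field, $v = \nabla \phi$ for some potential $\phi$, then (assuming $\phi$ is twice continuously differentiable) in two dimensions

$$\operatorname{curl}(v) = \frac{\partial v_2}{\partial x} - \frac{\partial v_1}{\partial y} = \frac{\partial^2 \phi}{\partial x \, \partial y} - \frac{\partial^2 \phi}{\partial y \, \partial x} = 0,$$

so any circulation in the tracks (a moose walking a loop) rules out a gradient-field model.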
7,213
The moose must flow, but how?
Windy moose You may have too few data points to really "model" this. But this does not mean that you could not see the patterns in the data. The trick is that, instead of a spherical cow, you may generalize your beloved moose as wind, and each footprint as a weather station that indicates the observed direction at that point. With such simplifications in place, you could generate a streamline or quiver plot to see the flow and extract the model.
7,214
The moose must flow, but how?
My thoughts from an ecologist's perspective, especially in the context of: The ultimate goal is to estimate likely paths that the moose are taking into and then out of the bounded region. Movement ecology I mentioned this in a comment, but movement ecology might be the place to look. It focuses a lot on movement data, particularly from GPS trackers, but it's also a very young field. I'm not sure if the theoretical foundation is there yet to go from something like GPS data (which has temporal information, possibly speed and acceleration info as well) to what you have (no temporal information, so you don't know the order of your tracks). Unfortunately, I only have a superficial familiarity with it, so I can't point to specific types of modeling/statistical analyses. But just a comment: you might be able to infer speed (or the magnitude of the vectors) by the stride length if you have enough tracks together in a sequence. Landscape ecology Depending on your goals and other types of data you might have access to, you could look at landscape ecology for inspiration. A very common goal of ecologists is to understand connectivity in landscapes, usually focused on a particular species, which is important for conservation planning. Connectivity in the landscape is directly related to movement. Low connectivity = low movement potential. Enter Circuit Theory (the ecological version, which is based on electrical circuit theory). In principle, if you have environmental data relevant to your species, you can create a resistance (or conductance) map specific to your species. For example, things like water and tree cover might indicate low resistance areas for moose, since that's what they generally need to survive, whereas a boulder field might be high resistance (they are more likely to go around it rather than through it). With that single resistance map, you can then model flow and connectivity to predict where your organism might move.
Typically, this is based on a random-walk model due to its simplicity. With environmental data, your moose tracks can potentially be used to create a species distribution model. This in turn can be used as a conductance map for circuit theory. The idea being that if an area is considered "good" for the species, then it's probably easier for them to move through it as well. The problem is that species distribution modeling is a huge and complex topic. The most well known tool in (ecological) circuit theory might be Circuitscape. There are a ton of research papers and reports that utilize it. More recently, someone I know developed a generalization of circuit theory that incorporates absorption (e.g., mortality), and I developed the samc R package for it. Specifically, it allows you to calculate things like the probability of reaching a particular point, how many times an individual is expected to visit a point, how long it's expected to take to reach a point, how long an individual is expected to survive, etc. There is overlap between movement and landscape ecology, including incorporating the movement data with correlated random-walks, which can then be incorporated into circuit theory (something I hope to incorporate as an option in my package in the future).
7,215
The moose must flow, but how?
Not an answer, just an extended comment. First, if you have no temporal data and it can be assumed that not all tracks were found, some could have been damaged, etc., then it is not possible to exactly recreate the path. Only approximate, "educated guesses" are possible. If you look at the picture you posted, there are several possibilities for solving it. You have a collection of points visited by the moose. You probably can make an assumption that if two tracks heading in a similar direction are close to each other, they are more likely to follow one after another. If you frame it like this, it is a variation of the traveling salesman problem, isn't it? The traveling salesman approach would find a single path; given the noisy nature of the data, this might not be the best solution. Another approach might be to simulate possible paths (the direction of the vector tells you how they should start and end) between all the pairs of points, where the importance of each path, or the probability of sampling it, would be inversely proportional to the length of the path. In such a case, you would look for regions with many overlapping paths, or a higher total importance weight of the paths within an area, to find the most likely ones. Here a "likely" path would be one that results from overlapping many simulated paths. This might be more challenging technically (how would you generate the curves? how would you judge if they are plausible?), but to prove to yourself that the approach might make sense, try drawing the lines by hand first. As you can see from the image below, after drawing a bunch of "random" lines, patterns start to emerge. Drawing them by hand is not the best idea, because people are very bad at generating "random" things; we seek and force patterns, so you would quickly start generating "random" paths that fit your hypothesis. This is just an example to show how sampling random paths could be useful here.
7,216
The moose must flow, but how?
This approach of finding best-fit vector paths through a bounded volume with directed point measurements is the overall principle of Diffusion Tensor Imaging. There is a large volume of methodology and mathematics around finding paths under these constraints. An introductory article, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3163395/, explains the general principle of anisotropic measurements in voxels (or pixels, in your case). More advanced approaches can account for crossing fibers, which is more likely in your case since you only have two dimensions. These methods are generally validated against real-world physical brains, so you can have some confidence that they have validity within their constraints. I hope you can adapt these principles to your moose-tracking problem.
7,217
The moose must flow, but how?
Building on the answer of Betterthan Kwora, here is one possible approach. You can view your vector field as a function $\{(x_i, y_i)\}_{1 \le i \le n} \subset \mathbb{R}^2 \rightarrow [0, 2\pi]$, because each moose vector has unit length. You can use interpolation to extend this to a function defined on the whole of $\mathbb{R}^2$, for example by using radial basis functions. Once you have this extended vector field, you can simulate possible paths. Here is an implementation of the idea in R. First, here is a function to simulate some data. By default, these moose tend to move from east to west: simulate_moose <- function(N, x1=1, y1=1, a1=2, a2=2){ # simulate some data in the rectangle [0, x1] x [0, y1] # N: number of data points # a1, a2: parameters of beta distribution (bias moose direction) # default "ground truth" is that moose are moving westwards in this example # choose start point for each vector x <- runif(N) * x1 y <- runif(N) * y1 # choose a direction for each vector angles <- rbeta(N, a1, a2) * 2 * pi vx <- cos(angles) vy <- sin(angles) list(x=x, y=y, vx=vx, vy=vy) } Here is a function to fit a vector field to moose data: fit_vector_field <- function(moose_data, r=0.2){ # fit vector field using normals # larger r = more smoothing x0 <- moose_data$x y0 <- moose_data$y vx0 <- moose_data$vx vy0 <- moose_data$vy # get angle from vx and vy values in data theta <- acos(vx0) theta[vy0 > 0] <- -theta[vy0 > 0] # convert angles to real number in range (-inf, inf) z <- tan((theta - pi)/2) fitted_field <- function(x, y){ # get weights using Gaussian - invariant to rotations w <- (2 * pi * r^2)^-0.5 * exp(( -(x - x0)^2 -(y - y0)^2)/(2 * r^2)) w <- w/sum(w) # use weights to estimate tan of desired angle at desired point (x, y) z_est <- sum(w * z) # convert back to an angle theta_est <- 2 * atan(z_est) + pi # convert from angle to (vx, vy) direction vector list(vx=cos(theta_est), vy=sin(theta_est)) } fitted_field } Here is a plotting function and an example: 
plot_vector_field <- function(vector_field, vlength=1, ...){ # plot the moose data using a circular head for the vectors x <- vector_field$x y <- vector_field$y vx <- vector_field$vx * vlength vy <- vector_field$vy * vlength do.call(plot, c(list(x=x, y=y, xlab="", ylab="", cex=0), list(...))) for (i in 1:length(x)){ segments(x[i], y[i], x[i] + vx[i], y[i] + vy[i]) points(x[i] + vx[i], y[i] + vy[i], pch=19) } } simulate_moose_path <- function(fitted_field, start_x, start_y, N_steps, stepsize){ # simulate a path from start point (start_x, start_y) # use N steps of size stepsize x <- y <- rep(0, N_steps) x[1] <- start_x y[1] <- start_y for (i in 2:N_steps){ pred <- fitted_field(x[i-1], y[i-1]) x[i] <- x[i-1] + pred$vx * stepsize y[i] <- y[i-1] + pred$vy * stepsize } list(x=x, y=y) } # example set.seed(42) moose_data <- simulate_moose(60) fitted_field <- fit_vector_field(moose_data) plot_vector_field(moose_data, vlength=0.1, xlim=c(-0.1, 1.1), ylim=c(-0.1, 1.1)) for (i in -1:5){ for (j in -1:5){ path <- simulate_moose_path(fitted_field, i/5, j/5, 50, 0.1) lines(path, col="blue", lwd=2) } } The simulated data: and the simulated paths: Perhaps this naive approach might be useful if you want to get a quick check of the paths you get from more sophisticated/rigorous methods.
7,218
The moose must flow, but how?
It appears that you can break this down into separate problems. First, you can attempt to infer a moose movement vector for each point in your polygon. This will take the form of learning a function $f: \mathbb{R}^2 \mapsto \mathbb{R}^2$. Better yet, this would be a stochastic function, yielding a distribution over movement vectors for each location. Second, given your function $f$, you can attempt to infer likely trajectories. To infer trajectories, you'd simply simulate them (with sampling, if your function is stochastic), perhaps with your observations as initial points. There would be complications involving choice of step size, but overall this is not hard. Third, you'd need to fit differential equations to these trajectories. Given a family of ODEs with a certain parameterization, estimation of parameters is also not so hard, with many possible approaches in the literature.
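The first two steps can be sketched concretely. Here is a minimal Python illustration in which the track data, the Gaussian kernel, the bandwidth r, and the step size are all illustrative assumptions (not real data or a real library), and the fitted field is deterministic rather than stochastic:

```python
import math

# Hypothetical moose-track observations: (x, y, vx, vy), with (vx, vy)
# a unit direction vector. Made-up values for illustration only.
tracks = [(0.0, 0.0, 1.0, 0.0), (1.0, 0.2, 0.8, 0.6), (2.0, 1.0, 0.0, 1.0)]

def field(x, y, r=0.5):
    """Gaussian-kernel-weighted average of nearby track directions,
    renormalized to unit length (a deterministic stand-in for f)."""
    wx = wy = 0.0
    for tx, ty, vx, vy in tracks:
        w = math.exp(-((x - tx) ** 2 + (y - ty) ** 2) / (2 * r * r))
        wx += w * vx
        wy += w * vy
    n = math.hypot(wx, wy)
    return (wx / n, wy / n) if n > 0 else (0.0, 0.0)

def simulate(x, y, steps=20, h=0.1):
    """Euler-step a trajectory through the fitted field."""
    path = [(x, y)]
    for _ in range(steps):
        vx, vy = field(*path[-1])
        path.append((path[-1][0] + h * vx, path[-1][1] + h * vy))
    return path
```

For the stochastic version, `field` could instead return a distribution over directions (e.g. a von Mises over angles) and `simulate` would sample from it at each step.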
7,219
The moose must flow, but how?
Moose density

While other answers have taken more sophisticated approaches, I'd suggest neglecting the vector data for a moment - does your sampling mean that you can estimate the density of moose (regardless of direction) from your observed tracks? That will in itself be worthwhile.

You can then add data points using the vectors - if you have your set of points X and vectors V, you have an initial density from X itself, but you can enrich this by using X union (X + V) union (X - V). This may work better if the elements of V are not normalised, however.

If you have enough data you can also estimate the density of moose travelling in each direction, as a first step towards the wind models mentioned in other answers - this sort of approach isn't fancy but it does handle 'contradictory data' without conflict.
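A rough illustration of the enriched density idea, using a hand-rolled Gaussian kernel density estimate on hypothetical track data; the bandwidth and the 0.05 step length are arbitrary assumptions, not part of the suggestion above.

```python
import numpy as np

def kde(points, query, bandwidth=0.1):
    """Average of isotropic Gaussian bumps centred on each (enriched) point."""
    d2 = ((points - query) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * bandwidth ** 2)).mean() / (2 * np.pi * bandwidth ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(40, 2))          # hypothetical track locations
theta = rng.uniform(0, 2 * np.pi, size=40)   # observed track directions
V = 0.05 * np.column_stack([np.cos(theta), np.sin(theta)])

# enrich the point set as suggested: X union (X + V) union (X - V)
X_enriched = np.vstack([X, X + V, X - V])

inside = kde(X_enriched, np.array([0.5, 0.5]))   # density mid-polygon
outside = kde(X_enriched, np.array([5.0, 5.0]))  # density far from any track
```

The estimate is, as one would hope, large where tracks cluster and essentially zero far away; a directional version would simply split the prints by angle bin before smoothing.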
7,220
The moose must flow, but how?
Let's first propose a fairly general description of the underlying moose motion, and then consider how the hoofprint observations can help infer the specific dynamics. We start with an assumption that the observations cover either a large enough number of moose, or a long enough period of time, so that a continuous moose distribution is applicable. We can then apply concepts from statistical mechanics such as the Boltzmann equation -- granted that we don't know a priori what drives the motion of moose, so we can't fill in the equivalent of "forces acting on molecules", but we can describe the counting and kinematics given merely that each moose has a position and velocity and follows a continuous path in space. At any given time, the expected number of moose located in a small box $\Delta x\, \Delta y$ around $(x, y)$ and with velocity in a small box $\Delta\xi\, \Delta\eta$ around $(\xi, \eta)$ equals $$\Delta x\, \Delta y\, \Delta\xi\, \Delta\eta\, f(x, y, \xi, \eta),$$ which defines the distribution function $f$. A consequence is that the expected number of moose located in $\Delta x\, \Delta y$ with any velocity equals $$\Delta x\, \Delta y \int_{\mathbb{R}^2} d\xi\, d\eta\, f(x, y, \xi, \eta) \equiv \Delta x\, \Delta y\, \rho(x, y).$$ We call $\rho$ the density of moose. By the definition of velocity, each moose satisfies $dx/dt = \xi$, $dy/dt = \eta$. In a short time interval $\Delta t$, the expected net number of moose that cross (from left to right, minus from right to left) a "vertical" line segment $\Delta y$ around $(x, y)$ is the expected number of moose with any velocity $(\xi, \eta)$ and located in a box $(\xi\, \Delta t)\, \Delta y$, since this box of moose moves horizontally by $\xi\, \Delta t$ and crosses the line segment. 
Thus, this expected number of crossings equals $$\Delta t\, \Delta y \int_{\mathbb{R}^2} d\xi\, d\eta\, \xi\, f(x, y, \xi, \eta) \equiv \Delta t\, \Delta y\, \alpha(x, y).$$ Likewise, the expected net number of moose that cross (from bottom to top, minus from top to bottom) a "horizontal" line segment $\Delta x$ around $(x, y)$ equals $$\Delta t\, \Delta x \int_{\mathbb{R}^2} d\xi\, d\eta\, \eta\, f(x, y, \xi, \eta) \equiv \Delta t\, \Delta x\, \beta(x, y).$$ We call $(\alpha, \beta)$ the flux of moose. It is a useful vector field that we would like to infer, because it quantifies the net motion of moose in any direction: The crossings of an oblique line segment are given by an appropriate linear combination of $\alpha$ and $\beta$. Now, let's assume both that the moose distribution function is steady over time and that moose are neither created nor destroyed. Then, by setting to zero the expected net number of moose that cross (out of minus into) the four sides of a box $\Delta x\, \Delta y$ around $(x, y)$, we obtain $$\Delta t\, \Delta y\, \bigl(\alpha(x + \tfrac{1}{2}\Delta x, y) - \alpha(x - \tfrac{1}{2}\Delta x, y)\bigr) + \Delta t\, \Delta x\, \bigl(\beta(x, y + \tfrac{1}{2}\Delta y) - \beta(x, y - \tfrac{1}{2}\Delta y)\bigr) = 0.$$ Upon dividing by $\Delta t\, \Delta x\, \Delta y$ and taking the limit of a very small box, it follows that $$\frac{\partial\alpha(x, y)}{\partial x} + \frac{\partial\beta(x, y)}{\partial y} = 0.$$ This "continuity equation" says that the flux has zero divergence. This is the rigorous formulation of "no sources or sinks". Note that sources and sinks could exist even for a vector field that is nowhere zero, so ruling out the symmetric source and sink patterns sketched in the question is not sufficient. Now, what do the hoofprints represent? Each hoofprint tells us that a moose was present at a point $(x, y)$ and had a velocity in a specific direction, $(\xi, \eta) \propto (\cos\theta, \sin\theta)$. 
Unfortunately, this does not directly allow us to estimate any sort of distribution function, because moose may leave hoofprints at various rates. It would be easier if every moose always took a step every 1 second, say; then we would be uniformly sampling every moose's position and direction. But if moose may tend to pause in a certain area and take more time between steps, then the number of hoofprints will not adequately indicate the increased probability of finding moose there. It may be possible, though, to combine the hoofprint information with the kinematic properties derived above, and obtain a useful characterization of the moose flux. While moose at the same location can move in different directions (crossing paths), suppose they tend to move mostly in a similar direction. This will result in the flux $(\alpha, \beta)$ also tending to point in that direction. In the limit in which moose at each point $(x, y)$ have a unique velocity $(\xi, \eta)$, the flux is $(\alpha, \beta) = (\rho\xi, \rho\eta)$; upon relaxing this condition, we can still use the typical direction of $(\xi, \eta)$ around $(x, y)$ as an approximation of the direction of $(\alpha, \beta)$. In addition, although the sampling of hoofprints is nonuniform as noted above, it is still true that areas with no moose ($\rho = 0$) will have no hoofprints as well as zero flux. Thus, it is useful to include the number of nearby hoofprints in scaling an approximation of $(\alpha, \beta)$. So, we can consider modeling the flux as $$\bigl(\alpha(x, y), \beta(x, y)\bigr) = q(x, y) \sum_i (\cos\theta_i, \sin\theta_i)\, K(x - x_i, y - y_i),$$ i.e., a scalar function $q$ (which accommodates unknown factors such as local moose speed) times a kernel density estimate of the typical local direction. The kernel will tend to generate the noted correlation of flux with local hoofprint density, enabling the function $q$ to be smoother. The choice of kernel shape and width is an empirical matter. 
For the one unknown function $q(x, y)$, we require one partial differential equation, which was derived above: $$\frac{\partial\alpha(x, y)}{\partial x} + \frac{\partial\beta(x, y)}{\partial y} = 0.$$ That is, we could try using the continuity equation to solve numerically for the unknown magnitude of the flux at each point. Further investigation would be needed regarding boundary conditions and well-posedness. This approach is somewhat similar to how the question proposes treating the local direction $(\cos\theta, \sin\theta)$ as the result of normalizing an underlying vector field, but there is no reason that the underlying field needs to be the gradient of a scalar.
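The kernel part of this model is straightforward to compute. A sketch on hypothetical hoofprints (Gaussian kernel with an arbitrary width; the scalar $q$ and the numerical PDE solve for it are omitted here):

```python
import numpy as np

def flux_direction(x, y, prints_xy, prints_theta, width=0.15):
    """Kernel-smoothed typical direction:
    sum_i (cos theta_i, sin theta_i) * K(x - x_i, y - y_i)."""
    dx = x - prints_xy[:, 0]
    dy = y - prints_xy[:, 1]
    K = np.exp(-(dx ** 2 + dy ** 2) / (2 * width ** 2))  # Gaussian kernel
    return np.array([(np.cos(prints_theta) * K).sum(),
                     (np.sin(prints_theta) * K).sum()])

# hypothetical hoofprints, all pointing roughly east (angle near zero)
rng = np.random.default_rng(0)
xy = rng.uniform(0, 1, size=(50, 2))
theta = rng.normal(0.0, 0.2, size=50)
v = flux_direction(0.5, 0.5, xy, theta)
```

With near-eastward prints the smoothed direction at the centre points strongly east, and its magnitude already carries the hoofprint-density scaling discussed above; multiplying by $q(x, y)$ and enforcing zero divergence would complete the model.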
7,221
The moose must flow, but how?
One thing you could consider would be a discrete model. The idea is this: using your measurement points, make a Voronoi diagram to divide your polygon up into cells. The direction measurement gives a directed graph on the cells, as it points to a unique adjacent cell, and you could consider moose to travel deterministically as prescribed by this graph. (Or, if you want to be a little more realistic, you could consider some distribution on directions whose mode is the measured direction, and consider moose to travel stochastically according to the resulting weighted graph.) One nice thing about this formulation is that it enforces your "no sinks, no sources" property: since each cell has exactly one other cell that it "points" to, your moose will never get stuck with no directions to go (a sink) and will never suffer from choice paralysis (a source). Your poor memoryless moose might get stuck wandering in a cycle forever, though!
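A toy version of the discrete model. Here the "Voronoi cell" of a point is just its nearest measurement site, and the four hypothetical directions are deliberately chosen to form a cycle, illustrating the closing remark; a real implementation would build the diagram from the actual measurement points.

```python
import numpy as np

# hypothetical measurement points and directions (radians)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
theta = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])  # a 4-cycle

def cell_of(p):
    # Voronoi cell membership = index of the nearest measurement point
    return int(np.argmin(((pts - p) ** 2).sum(axis=1)))

# deterministic transition: step from each site along its measured direction
# and record which cell the step lands in
step = 1.0
next_cell = [cell_of(pts[i] + step * np.array([np.cos(theta[i]), np.sin(theta[i])]))
             for i in range(len(pts))]

# walk the directed graph from cell 0
path, c = [0], 0
for _ in range(6):
    c = next_cell[c]
    path.append(c)
```

Each cell points to exactly one successor, so there are no sinks or sources, and the walk from cell 0 visits 0 → 1 → 2 → 3 and then loops forever.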
7,222
Covariance of a random vector after a linear transformation
For a random (column) vector $\mathbf Z$ with mean vector $\mathbf{m} = E[\mathbf{Z}]$, the covariance matrix is defined as $\operatorname{cov}(\mathbf{Z}) = E[(\mathbf{Z}-\mathbf{m})(\mathbf{Z}-\mathbf{m})^T]$. Thus, the covariance matrix of $A\mathbf{Z}$, whose mean vector is $A\mathbf{m}$, is given by $$\begin{align}\operatorname{cov}(A\mathbf{Z}) &= E[(A\mathbf{Z}-A\mathbf{m})(A\mathbf{Z}-A\mathbf{m})^T]\\ &= E[A(\mathbf{Z}-\mathbf{m})(\mathbf{Z}-\mathbf{m})^TA^T]\\ &= AE[(\mathbf{Z}-\mathbf{m})(\mathbf{Z}-\mathbf{m})^T]A^T\\ &= A\operatorname{cov}(\mathbf{Z})A^T. \end{align}$$
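The identity is easy to sanity-check numerically; since the derivation uses only linearity, it also holds exactly (up to floating point) for the sample covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(3, 10000))            # 3-dim random vector, 10000 samples
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, -1.0]])           # any 2x3 matrix works

cov_Z = np.cov(Z)                          # 3x3 sample covariance of Z
cov_AZ = np.cov(A @ Z)                     # 2x2 sample covariance of AZ
ok = np.allclose(cov_AZ, A @ cov_Z @ A.T)  # cov(AZ) = A cov(Z) A^T
```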
7,223
How to Handle Many Times Series Simultaneously?
As Ben mentioned, the textbook methods for multiple time series are VAR and VARIMA models. In practice, though, I have not seen them used that often in the context of demand forecasting.

Much more common, including what my team currently uses, is hierarchical forecasting (see here as well). Hierarchical forecasting is used whenever we have groups of similar time series: sales history for groups of similar or related products, tourist data for cities grouped by geographical region, etc. The idea is to have a hierarchical listing of your different products and then do forecasting both at the base level (i.e. for each individual time series) and at aggregate levels defined by your product hierarchy (see attached graphic). You then reconcile the forecasts at the different levels (using Top Down, Bottom Up, Optimal Reconciliation, etc.) depending on the business objectives and the desired forecasting targets. Note that you won't be fitting one large multivariate model in this case, but multiple models at different nodes in your hierarchy, which are then reconciled using your chosen reconciliation method.

The advantage of this approach is that by grouping similar time series together, you can take advantage of the correlations and similarities between them to find patterns (such as seasonal variations) that might be difficult to spot with a single time series. Since you will be generating a large number of forecasts that is impossible to tune manually, you will need to automate your time series forecasting procedure, but that is not too difficult - see here for details.

A more advanced, but similar in spirit, approach is used by Amazon and Uber, where one large RNN/LSTM neural network is trained on all of the time series at once. It is similar in spirit to hierarchical forecasting because it also tries to learn patterns from similarities and correlations between related time series. It is different from hierarchical forecasting because it tries to learn the relationships between the time series itself, as opposed to having this relationship predetermined and fixed prior to doing the forecasting. In this case, you no longer have to deal with automated forecast generation, since you are tuning only one model, but since the model is a very complex one, the tuning procedure is no longer a simple AIC/BIC minimization task, and you need to look at more advanced hyper-parameter tuning procedures, such as Bayesian Optimization. See this response (and comments) for additional details.

For Python packages, PyAF is available but not very popular. Most people use the hts package in R, for which there is a lot more community support. For LSTM-based approaches, there are Amazon's DeepAR and MQRNN models, which are part of a service you have to pay for. Several people have also implemented LSTM for demand forecasting using Keras; you can look those up.
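As a toy illustration of the simplest reconciliation scheme (bottom-up), with hypothetical product names and a deliberately naive base forecaster standing in for whatever automated model each node would really get:

```python
import numpy as np

# hypothetical demand history for two products in one product group
history = {"prod_A": np.array([10., 12., 11., 13.]),
           "prod_B": np.array([20., 19., 21., 22.])}

def naive_forecast(series, h=2):
    # placeholder base forecaster: repeat the last observation h times
    return np.repeat(series[-1], h)

# forecast each base series, then reconcile bottom-up:
# the group-level forecast is the sum of the base-level forecasts
base = {k: naive_forecast(v) for k, v in history.items()}
group = sum(base.values())
```

Top-down and optimal reconciliation instead fit a model at the aggregate level (or at all levels) and redistribute; the hts package in R implements all of these.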
7,224
How to Handle Many Times Series Simultaneously?
Generally when you have multiple time series you would use some kind of vector-based model to model them all simultaneously. The natural extension of the ARIMA model for this purpose is the VARIMA (Vector ARIMA) model. The fact that you have $1200$ time series means that you will need to specify some heavy parametric restrictions on the cross-correlation terms in the model, since you will not be able to deal with free parameters for every pair of time-series variables. I would suggest starting with some simple vector-based model (e.g., VAR, VMA, VARMA) with low order, and some simple parameter restrictions for cross-correlation. See if you can find a reasonable model that incorporates cross-correlation with at least one lag, and then go from there. This exercise will require reading up on vector-based time-series models. The MTS package and the bigtime package in R have some capabilities for dealing with multivariate time series, so it would also be worth familiarising yourself with these packages.
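As a rough sketch of the idea, here is a plain unrestricted VAR(1) estimated by least squares on simulated data for just 3 series; the MTS and bigtime packages provide the real tooling, including the parameter restrictions and diagnostics you would need at scale:

```python
import numpy as np

# hypothetical demand for 3 products over 25 periods (rows = time)
rng = np.random.default_rng(0)
Y = np.cumsum(rng.normal(size=(25, 3)), axis=0)

# VAR(1): Y_t = c + A Y_{t-1} + e_t, estimated by multivariate least squares
X = np.hstack([np.ones((24, 1)), Y[:-1]])      # regressors: [1, Y_{t-1}]
B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)  # B stacks intercept and A^T
c, A = B[0], B[1:].T

# one-step-ahead forecast from the last observation
y_next = c + A @ Y[-1]
```

Even this tiny model has 12 free parameters for 3 series; with 1200 series an unrestricted VAR(1) would need over 1.4 million, which is why the restrictions (or dimension reduction) are unavoidable.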
7,225
How to Handle Many Times Series Simultaneously?
The problem with the mass-fitting packages that have been suggested is that they uniformly fail to deal with latent deterministic structure such as pulses, level/step shifts, seasonal pulses and time trends, or to efficiently deal with user-suggested causals as per https://autobox.com/pdfs/SARMAX.pdf. Additionally, the compute time can be a serious complication. AUTOBOX (which I helped to develop) has a very sophisticated model-building phase which archives models, and a very quick forecasting option that reuses previously developed models, reducing the forecasting time to a small fraction of the rigorous model development time while adjusting the new forecast for recent data observed after the model had been developed and stored. This was implemented for Anheuser-Busch's 600,000-store forecast project for some 50+ items, taking into account price and weather. Models can be updated in a rolling fashion, replacing prior models as needed. There is no need for parametric restrictions, nor for omitting the simultaneous effect of causal variables as in VAR and VARIMA, which rely solely on the past of all series a la ARIMA. There is no need to have just one model with one set of parameters, as models can and should be tailored/optimized to the individual series. Unfortunately there is no Python solution yet, but hope springs eternal.
7,226
How to Handle Many Time Series Simultaneously?
1200 products is the main driver of the dimensionality of your problem, and you have only 25 periods. This is very little data, insufficient to do any kind of blanket correlation analysis. In other words, you don't have the data to forecast all products simultaneously without reducing the dimensionality. This pretty much eliminates all VARMA and other nice theoretical models: it's impossible to deal with the coefficients of these models, there are too many of them to estimate. Consider a simple correlation analysis. You'd need (1200x1200 + 1200)/2 cells in the covariance/correlation matrix, but you have only 25 data points, so the matrix will be rank deficient to an enormous degree. What are you going to do? Broadly, you have two simple approaches: separate forecasts and a factor model. The first approach is obvious: you run each product independently. A variation is to group products by some feature, e.g. a sector such as "men's clothing". The second approach is to represent the product demand as $d_i=\sum_jF_{j}\beta_{ji}+e_i$, where $F_j$ is a factor. What are the factors? They could be exogenous, such as the GDP growth rate, or endogenous, e.g. those you obtained with a PCA analysis. For exogenous factors, you'd need to obtain the betas by regressing each series on those factors individually. For PCA, you could do a robust PCA and get the first few factors with their weights, which are your betas. Next, you analyze the factors and build a forecasting model to produce $\hat F_j$, then plug these back into your model to obtain a forecast of product demand. You could run a time series model for each factor, or even a vector model such as VARMA for several factors. Now that the dimensionality of the problem has been reduced, you may have enough data to build a time series forecast.
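A minimal numerical sketch of the second (factor-model) approach, using plain NumPy on simulated data. The 25 x 1200 shapes mirror the question, but the two-factor structure, the noise level, and the crude AR(1)-style factor forecast are all illustrative assumptions, not a tuned method:

```python
import numpy as np

# Illustrative data: 25 periods x 1200 products (shapes from the question;
# the 2-factor structure and noise level are assumptions for the demo).
rng = np.random.default_rng(0)
n_periods, n_products, k = 25, 1200, 2
true_factors = rng.normal(size=(n_periods, k))
loadings = rng.normal(size=(k, n_products))
demand = true_factors @ loadings + 0.1 * rng.normal(size=(n_periods, n_products))

# Step 1: extract factors via PCA (SVD of the centered demand matrix).
mean = demand.mean(axis=0)
U, S, Vt = np.linalg.svd(demand - mean, full_matrices=False)
factors = U[:, :k] * S[:k]   # 25 x k factor time series, the F_j
betas = Vt[:k, :]            # k x 1200 weights, the beta_ji

# Step 2: forecast each factor one step ahead (a crude AR(1) slope fit,
# standing in for a proper time series model per factor).
phi = np.array([np.polyfit(factors[:-1, j], factors[1:, j], 1)[0]
                for j in range(k)])
f_hat = phi * factors[-1]

# Step 3: map the factor forecast back to all 1200 products at once.
demand_hat = f_hat @ betas + mean
print(demand_hat.shape)  # (1200,)
```

A real application would replace the one-step slope fit with a proper time series model per factor (or a small VARMA across factors), and plain SVD with a robust PCA.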
7,227
How to Handle Many Time Series Simultaneously?
I am not sure if you are interested in cloud-based solutions, but Amazon makes an algorithm called "DeepAR" available through AWS SageMaker, as seen here. This algorithm is specifically designed to learn from multiple input time series in order to create forecasts, including static and dynamic features, as this excerpt from the linked page shows: "The training input for the DeepAR algorithm is one or, preferably, more target time series that have been generated by the same process or similar processes. Based on this input dataset, the algorithm trains a model that learns an approximation of this process/processes and uses it to predict how the target time series evolves. Each target time series can be optionally associated with a vector of static (time-independent) categorical features provided by the cat field and a vector of dynamic (time-dependent) time series provided by the dynamic_feat field." Unfortunately, as far as I can tell, they do not make this algorithm available for offline/self-hosted use.
7,228
What are the advantages of kernel PCA over standard PCA?
PCA (as a dimensionality reduction technique) tries to find a low-dimensional linear subspace that the data are confined to. But it might be that the data are confined to a low-dimensional nonlinear subspace. What will happen then? Take a look at this figure, taken from Bishop's "Pattern Recognition and Machine Learning" textbook (Figure 12.16): The data points here (on the left) are located mostly along a curve in 2D. PCA cannot reduce the dimensionality from two to one, because the points are not located along a straight line. But still, the data are "obviously" located around a one-dimensional non-linear curve. So while PCA fails, there must be another way! And indeed, kernel PCA can find this non-linear manifold and discover that the data are in fact nearly one-dimensional. It does so by mapping the data into a higher-dimensional space. This can indeed look like a contradiction (your question #2), but it is not. The data are mapped into a higher-dimensional space, but then turn out to lie on a lower-dimensional subspace of it. So you increase the dimensionality in order to be able to decrease it. The essence of the "kernel trick" is that one does not actually need to explicitly consider the higher-dimensional space, so this potentially confusing leap in dimensionality is performed entirely undercover. The idea, however, stays the same.
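A quick way to see this in code is the classic concentric-circles example: with a single component, linear PCA has no straight line onto which the two rings project separably, while kernel PCA with an RBF kernel "unrolls" them. This is a sketch with scikit-learn; the gamma value is an illustrative choice, not a tuned one:

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

# Two concentric circles: data confined to 1-D nonlinear curves in 2-D.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# Project to a single component with linear PCA and with kernel PCA.
pc1 = PCA(n_components=1).fit_transform(X).ravel()
kpc1 = KernelPCA(n_components=1, kernel="rbf", gamma=10).fit_transform(X).ravel()

# Standardized gap between the two rings along each 1-D projection.
def separation(z):
    return abs(z[y == 0].mean() - z[y == 1].mean()) / z.std()

print(separation(pc1), separation(kpc1))  # kernel PCA separates far better
```

The higher-dimensional feature space induced by the RBF kernel is never materialized; the kernel trick works entirely through pairwise similarities, exactly as described above.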
7,229
Is p-value essentially useless and dangerous to use?
Here are some thoughts:

- As @whuber notes, I doubt Gelman said that (although he may have said something similar sounding).
- Five percent of cases where the null is true will yield significant results (type I errors) using an alpha of .05. If we assume that the true power for all studies where the null was false was $80\%$, the statement could only be true if the ratio of studies undertaken where the null was true to studies in which the null was false was $100/118.75 \approx 84\%$.
- Model selection criteria, such as the AIC, can be seen as a way of selecting an appropriate $p$-value. To understand this more fully, it may help to read @Glen_b's answer here: Stepwise regression in R – Critical p-value. Moreover, nothing prevents people from 'AIC-hacking' if the AIC became the requirement for publication.
- A good guide to fitting models in such a manner that you don't invalidate your $p$-values would be Frank Harrell's book, Regression Modeling Strategies.
- I am not dogmatically opposed to using Bayesian methods, but I do not believe they would solve this problem. For example, you could just keep collecting data until the credible interval no longer includes whatever value you wanted to reject. Thus you have 'credible-interval hacking'.
- As I see it, the issue is that many practitioners are not intrinsically interested in the statistical analyses they use, so they will use whichever method is required of them in an unthinking and mechanical way. For more on my perspective here, it may help to read my answer to: Effect size as the hypothesis for significance testing.
7,230
Is p-value essentially useless and dangerous to use?
To me, one of the most interesting things about the p-hacking controversy is that the entire history of p <= 0.05 as the "once in a blue moon" standard for statistical significance, as Joseph Kadane noted in a JASA article on forensic statistics back in the 90s, rests on absolutely no statistical theory whatsoever. It's a convention, a simple heuristic and rule of thumb that started with R.A. Fisher and has since been reified or consecrated into its present "unquestioned" status. Bayesian or not, the time is long overdue to challenge this metric standard, or at least give it the skepticism it deserves. That said, my interpretation of Gelman's point is that, as is well known, the peer review process rewards positive statistical significance and punishes insignificant results by not publishing those papers. This is irrespective of whether publishing an insignificant finding would have a potentially large impact on the thinking and theorizing for a given domain. Gelman, Simonsohn and others have repeatedly pointed to the abuse of the 0.05 significance level in peer-reviewed and published research by holding up examples of ridiculous, yet statistically significant, findings in paranormal, social and psychological research. One of the most egregious was the statistically significant finding that pregnant women were more likely to wear red dresses. Gelman maintains that, in the absence of logical challenges to statistical results, the mere fact that an analysis is "statistically significant" is a potentially meaningless explanation. Here, he's referring to the industry's occupational hazard of overly technical and abstruse arguments that do little or nothing to advance a debate among a lay audience.

This is a point Gary King makes vehemently when he practically begs quantitative political scientists (and, by extension, all quants) to stop mechanistic, technical reportage such as "this result was significant at a p <= 0.05 level" and move towards more substantive interpretations. Here's a quote from a paper of his: "(1) convey numerically precise estimates of the quantities of greatest substantive interest, (2) include reasonable measures of uncertainty about those estimates, and (3) require little specialized knowledge to understand. The following simple statement satisfies our criteria: 'Other things being equal, an additional year of education would increase your annual income by 1,500 dollars on average, plus or minus about 500 dollars.' Any smart high school student would understand that sentence, no matter how sophisticated the statistical model and powerful the computers used to produce it." King's point is very well taken and maps out the direction the debate needs to take. Making the Most of Statistical Analyses: Improving Interpretation and Presentation, King, Tomz and Wittenberg, 2002, Am Jour of Poli Sci.
7,231
Is p-value essentially useless and dangerous to use?
Here are some of my thoughts regarding Question 3 after reading all the insightful comments and answers. Perhaps one practical guideline in statistical analysis to avoid p-value hacking is to look instead at the scientifically (or biologically, clinically, etc.) significant/meaningful effect size. Specifically, the researcher should pre-define the effect size that can be declared useful or meaningful before the data analysis, or even before the data collection. For example, letting $\theta$ denote a drug effect, instead of testing the hypothesis $$H_0: \theta = 0 \quad \quad vs. \quad \quad H_a: \theta \neq 0,$$ one should always test $$H_0: \theta < \delta \quad \quad vs. \quad \quad H_a: \theta \ge \delta,$$ with $\delta$ being the predefined effect size needed to claim meaningful significance. In addition, to avoid using too large a sample size to detect the effect, the sample size required should be taken into account as well. That is, we should put a constraint on the maximum sample size used for the experiment. To sum up:

- We need to predefine a threshold for the meaningful effect size to declare significance;
- We need to predefine a threshold for the sample size used in the experiment to quantify how detectable the meaningful effect size is;
- With the above, maybe we can avoid minor "significant" effects claimed by a huge sample size.

[Update 6/9/2015] Regarding Question 3, here are some suggestions based on the recent Nature paper "The fickle P value generates irreproducible results", as I mentioned in the Question part:

- Report effect size estimates and their precision, i.e. a 95% confidence interval, since this more informative summary answers exactly questions like how big the difference is, or how strong the relationship or association is;
- Put the effect size estimates and 95% CIs into the context of the specific scientific studies/questions, focus on their relevance to answering those questions, and discount the fickle P value;
- Replace the power analysis with "planning for precision" to determine the sample size required for estimating the effect size to reach a defined degree of precision.

[End update 6/9/2015]
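As a sketch of what testing against a predefined $\delta$ looks like in practice, the one-sided test can be run by centering the null at $\delta$ rather than at zero. Everything numeric here (the simulated effect measurements, $\delta = 0.5$) is made up purely for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical effect measurements; delta is the predefined minimum
# meaningful effect size (both are illustrative assumptions).
rng = np.random.default_rng(1)
effects = rng.normal(loc=0.8, scale=1.0, size=50)
delta = 0.5

# H0: theta <= delta vs Ha: theta > delta -- center the null at delta.
res_delta = stats.ttest_1samp(effects, popmean=delta, alternative="greater")

# Compare with the conventional test against zero effect.
res_zero = stats.ttest_1samp(effects, popmean=0.0, alternative="greater")
print(res_zero.pvalue, res_delta.pvalue)  # the delta test is stricter
```

With the same data, the p-value against $\delta > 0$ is necessarily larger than the p-value against zero, which is exactly the point: tiny effects that a huge sample would flag as "significant" no longer clear the bar.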
7,232
Is p-value essentially useless and dangerous to use?
In contemporary usage, the p-value refers to the cumulative probability of the data given the null hypothesis being at or below some threshold, i.e. $P(D|H_0)\le\alpha$. I think that $H_0$ tends to be a hypothesis of 'no effect', usually proxied by a comparison to the probability of a satisfactorily unlikely random result in some number of trials. Depending on the field, the threshold varies from 5% down to 0.1% or less. However, $H_0$ does not have to be a comparison to random. An $\alpha$ of 0.05 implies that 1 in 20 results may reject the null when they should not have. If science based its conclusions on single experiments, the statement would be defensible. Otherwise, if experiments were repeated, it would imply that 19/20 would not be rejected. The moral of the story is that experiments should be repeatable. Science is a tradition grounded in "objectivity", so "objective probability" naturally appeals. Recall that experiments are supposed to demonstrate a high degree of control, often employing block design and randomisation to control for factors outside of the study. Thus, comparison to random does make sense, because all other factors are supposed to be controlled for except the ones under study. These techniques were highly successful in agriculture and industry before being ported to science. I'm not sure if a lack of information was ever really the problem. It's notable that for many in the non-mathematical sciences, statistics is just a box to tick. I'd suggest a general read about decision theory, which unites the two frameworks. It simply comes down to using as much information as you have. Frequentist statistics assume parameters in models have unknown values from fixed distributions. Bayesians assume parameters in models come from distributions conditioned on what we know. If there is enough information to form a prior, and enough information to update it to an accurate posterior, then that's great. If there isn't, then you may end up with worse results.
7,233
Is p-value essentially useless and dangerous to use?
Reproducibility of statistical test results This is a short, simple exercise to assess the reproducibility of decisions based on statistical testing. Consider a null hypothesis H0 with a set of alternative hypotheses containing H1 and H2. Set up the statistical hypothesis test procedure at a significance level of 0.05 to have a power of 0.8 if H1 is true. Further assume that the power for H2 is 0.5. To assess the reproducibility of the test result, the experiment consists of executing the test procedure twice. Starting with the situation where H0 is true, the probabilities for the outcomes of the joint experiment are displayed in Table 1. The probability of not being able to reproduce decisions is 0.095. Table 1. Frequencies, if H0 is true \begin{array} {|l|r|r|} \hline \text{Frequency of decision} &\text{Reject } H_0 &\text{Retain } H_0 \\ \hline \text{Reject } H_0 &0.0025 &0.0475 \\ \hline \text{Retain } H_0 &0.0475 &0.9025 \\ \hline \end{array} The frequencies change as the true state of nature changes. Assuming H1 is true, H0 can be rejected as designed with a power of 0.8. The resulting frequencies for the different outcomes of the joint experiment are displayed in Table 2. The probability of not being able to reproduce decisions is 0.32. Table 2. Frequencies, if H1 is true \begin{array} {|l|r|r|} \hline \text{Frequency of decision} &\text{Reject } H_0 &\text{Retain } H_0 \\ \hline \text{Reject } H_0 &0.64 &0.16 \\ \hline \text{Retain } H_0 &0.16 &0.04 \\ \hline \end{array} Assuming H2 is true, H0 will be rejected with a probability of 0.5. The resulting frequencies for the different outcomes of the joint experiment are displayed in Table 3. The probability of not being able to reproduce decisions is 0.5. Table 3. Frequencies, if H2 is true \begin{array} {|l|r|r|} \hline \text{Frequency of decision} &\text{Reject } H_0 &\text{Retain } H_0 \\ \hline \text{Reject } H_0 &0.25 &0.25 \\ \hline \text{Retain } H_0 &0.25 &0.25 \\ \hline \end{array}
The test procedure was designed to control type I errors (the rejection of the null hypothesis even though it is true) with a probability of 0.05 and to limit type II errors (no rejection of the null hypothesis even though it is wrong and H1 is true) to 0.2. For both cases, with either H0 or H1 assumed to be true, this leads to non-negligible frequencies, 0.095 and 0.32 respectively, of "non-reproducible", "contradictory" decisions if the same experiment is repeated twice. The situation gets worse, with a frequency of up to 0.5 for "non-reproducible", "contradictory" decisions, if the true state of nature lies between the null and the alternative hypothesis used to design the experiment. The situation can also get better - if type I errors are controlled more strictly, or if the true state of nature is far away from the null, which results in a power to reject the null that is close to 1. Thus, if you want more reproducible decisions, decrease the significance level and increase the power of your tests. Not very astonishing ...
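The disagreement probabilities in the three tables all follow from a single number: the per-run probability of rejecting H0. A minimal sketch (plain Python; the function name is my own) reproducing 0.095, 0.32, and 0.5:

```python
# Probability that two independent runs of the same test disagree,
# given the per-run probability p_reject of rejecting H0.
def p_contradict(p_reject):
    # Disagreement = (reject, retain) or (retain, reject).
    return 2 * p_reject * (1 - p_reject)

# H0 true: p_reject = alpha = 0.05  -> 0.095 (Table 1)
# H1 true: p_reject = power = 0.8   -> 0.32  (Table 2)
# H2 true: p_reject = power = 0.5   -> 0.5   (Table 3)
for p in (0.05, 0.8, 0.5):
    print(p, p_contradict(p))
```

The function is maximised at p_reject = 0.5, which is why the in-between hypothesis H2 gives the worst reproducibility.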
Is p-value essentially useless and dangerous to use?
Reproducibility of statistical test results This is a short, simple exercise to assess the reproducibility of decisions based on statistical testing. Consider a null hypothesis H0 with a set of alter
Is p-value essentially useless and dangerous to use? Reproducibility of statistical test results This is a short, simple exercise to assess the reproducibility of decisions based on statistical testing. Consider a null hypothesis H0 with a set of alternative hypotheses containing H1 and H2. Set up the statistical hypothesis test procedure at a significance level of 0.05 to have a power of 0.8 if H1 is true. Further assume that the power for H2 is 0.5. To assess the reproducibility of the test result, the experiment consists of executing the test procedure twice. Starting with the situation where H0 is true, the probabilities for the outcomes of the joint experiment are displayed in Table 1. The probability of not being able to reproduce decisions is 0.095. Table 1. Frequencies, if H0 is true \begin{array} {|l|r|r|} \hline \text{Frequency of decision} &\text{Reject } H_0 &\text{Retain } H_0 \\ \hline \text{Reject } H_0 &0.0025 &0.0475 \\ \hline \text{Retain } H_0 &0.0475 &0.9025 \\ \hline \end{array} The frequencies change as the true state of nature changes. Assuming H1 is true, H0 can be rejected as designed with a power of 0.8. The resulting frequencies for the different outcomes of the joint experiment are displayed in Table 2. The probability of not being able to reproduce decisions is 0.32. Table 2. Frequencies, if H1 is true \begin{array} {|l|r|r|} \hline \text{Frequency of decision} &\text{Reject } H_0 &\text{Retain } H_0 \\ \hline \text{Reject } H_0 &0.64 &0.16 \\ \hline \text{Retain } H_0 &0.16 &0.04 \\ \hline \end{array} Assuming H2 is true, H0 will be rejected with a probability of 0.5. The resulting frequencies for the different outcomes of the joint experiment are displayed in Table 3. The probability of not being able to reproduce decisions is 0.5. Table 3. Frequencies, if H2 is true \begin{array} {|l|r|r|} \hline \text{Frequency of decision} &\text{Reject } H_0 &\text{Retain } H_0 \\ \hline \text{Reject } H_0 &0.25 &0.25 \\ \hline \text{Retain } H_0 &0.25 &0.25 \\ \hline \end{array}
The test procedure was designed to control type I errors (the rejection of the null hypothesis even though it is true) with a probability of 0.05 and to limit type II errors (no rejection of the null hypothesis even though it is wrong and H1 is true) to 0.2. For both cases, with either H0 or H1 assumed to be true, this leads to non-negligible frequencies, 0.095 and 0.32 respectively, of "non-reproducible", "contradictory" decisions if the same experiment is repeated twice. The situation gets worse, with a frequency of up to 0.5 for "non-reproducible", "contradictory" decisions, if the true state of nature lies between the null and the alternative hypothesis used to design the experiment. The situation can also get better - if type I errors are controlled more strictly, or if the true state of nature is far away from the null, which results in a power to reject the null that is close to 1. Thus, if you want more reproducible decisions, decrease the significance level and increase the power of your tests. Not very astonishing ...
Is p-value essentially useless and dangerous to use? Reproducibility of statistical test results This is a short, simple exercise to assess the reproducibility of decisions based on statistical testing. Consider a null hypothesis H0 with a set of alter
7,234
Backpropagation vs Genetic Algorithm for Neural Network training
If you look carefully at the scientific literature you'll find contrasting results. Obviously, in some cases GAs (and, more generally, Evolutionary Algorithms) may help you to find an optimal NN design, but normally they have so many drawbacks (algorithm parameter tuning, computational complexity, etc.) that their use is not feasible for real-world applications. Of course you can find a set of problems where GA/EAs are always better than backpropagation. Given that finding an optimal NN design is a complex multimodal optimization problem, GA/EAs may help (as metaheuristics) to improve the results obtained with "traditional" algorithms, e.g. using GA/EAs to find only the initial weight configuration or helping traditional algorithms to escape from local minima (if you are interested I wrote a paper about this topic). I worked a lot in this field and I can tell you that there are many scientific works on GA/EAs applied to NNs because they are (or better, they used to be) an emerging research field.
Backpropagation vs Genetic Algorithm for Neural Network training
If you look carefully at the scientific literature you'll find contrasting results. Obviously, in some cases GA (and more in general, Evolutionary Algorithms) may help you to find an optimal NN design
Backpropagation vs Genetic Algorithm for Neural Network training If you look carefully at the scientific literature you'll find contrasting results. Obviously, in some cases GAs (and, more generally, Evolutionary Algorithms) may help you to find an optimal NN design, but normally they have so many drawbacks (algorithm parameter tuning, computational complexity, etc.) that their use is not feasible for real-world applications. Of course you can find a set of problems where GA/EAs are always better than backpropagation. Given that finding an optimal NN design is a complex multimodal optimization problem, GA/EAs may help (as metaheuristics) to improve the results obtained with "traditional" algorithms, e.g. using GA/EAs to find only the initial weight configuration or helping traditional algorithms to escape from local minima (if you are interested I wrote a paper about this topic). I worked a lot in this field and I can tell you that there are many scientific works on GA/EAs applied to NNs because they are (or better, they used to be) an emerging research field.
Backpropagation vs Genetic Algorithm for Neural Network training If you look carefully at the scientific literature you'll find contrasting results. Obviously, in some cases GA (and more in general, Evolutionary Algorithms) may help you to find an optimal NN design
7,235
Backpropagation vs Genetic Algorithm for Neural Network training
One of the key problems with neural networks is over-fitting, which means that algorithms that try very hard to find a network that minimises some criterion based on a finite sample of data will end up with a network that works very well for that particular sample of data, but which will have poor generalisation. I am rather wary of using GAs to design neural networks for this reason, especially if they do architecture optimisation at the same time as optimising the weights. I have generally found that training networks (with regularisation) from a number (say 20) of random initial weight vectors and then forming an ensemble of all the resulting networks is as good an approach as any. Essentially, optimisation is the root of all evil in machine learning: the more of it you do, the more likely you are to end up over-fitting the data.
Backpropagation vs Genetic Algorithm for Neural Network training
One of the key problems with neural networks is over-fitting, which means that algorithms that try very hard to find a network that minimises some criterion based on a finite sample of data will end u
Backpropagation vs Genetic Algorithm for Neural Network training One of the key problems with neural networks is over-fitting, which means that algorithms that try very hard to find a network that minimises some criterion based on a finite sample of data will end up with a network that works very well for that particular sample of data, but which will have poor generalisation. I am rather wary of using GAs to design neural networks for this reason, especially if they do architecture optimisation at the same time as optimising the weights. I have generally found that training networks (with regularisation) from a number (say 20) of random initial weight vectors and then forming an ensemble of all the resulting networks is as good an approach as any. Essentially, optimisation is the root of all evil in machine learning: the more of it you do, the more likely you are to end up over-fitting the data.
Backpropagation vs Genetic Algorithm for Neural Network training One of the key problems with neural networks is over-fitting, which means that algorithms that try very hard to find a network that minimises some criterion based on a finite sample of data will end u
7,236
Backpropagation vs Genetic Algorithm for Neural Network training
Whenever you deal with huge amounts of data and want to solve a supervised learning task with a feed-forward neural network, solutions based on backpropagation are much more feasible. The reason for this is that, for a complex neural network, the number of free parameters is very high. One industry project I am currently working on involves a feed-forward neural network with about 1000 inputs, two hidden layers @ 384 neurons each, and 60 outputs. This leads to 1000*384 + 384*384 + 384*60 = 554496 weight parameters to be optimized. Using a GA approach here would be terribly slow.
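The weight count quoted above is easy to verify; a quick sketch (plain Python, with bias terms ignored, matching the quoted figure):

```python
# Fully connected layer sizes: 1000 inputs -> 384 -> 384 -> 60 outputs.
layers = [1000, 384, 384, 60]

# Number of weights = sum over consecutive layer pairs of n_in * n_out
# (biases ignored, as in the 554496 figure in the text).
n_weights = sum(a * b for a, b in zip(layers, layers[1:]))
print(n_weights)  # 554496
```

A GA evolving a population of such networks would have to store and mutate hundreds of thousands of parameters per individual, which is why gradient-based training scales better here.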
Backpropagation vs Genetic Algorithm for Neural Network training
Whenever you deal with huge amounts of data and you want to solve a supervised learning task with a feed-forward neural network, solutions based on backpropagation are much more feasible. The reason f
Backpropagation vs Genetic Algorithm for Neural Network training Whenever you deal with huge amounts of data and want to solve a supervised learning task with a feed-forward neural network, solutions based on backpropagation are much more feasible. The reason for this is that, for a complex neural network, the number of free parameters is very high. One industry project I am currently working on involves a feed-forward neural network with about 1000 inputs, two hidden layers @ 384 neurons each, and 60 outputs. This leads to 1000*384 + 384*384 + 384*60 = 554496 weight parameters to be optimized. Using a GA approach here would be terribly slow.
Backpropagation vs Genetic Algorithm for Neural Network training Whenever you deal with huge amounts of data and you want to solve a supervised learning task with a feed-forward neural network, solutions based on backpropagation are much more feasible. The reason f
7,237
Backpropagation vs Genetic Algorithm for Neural Network training
Second answer is wrong. Overfitting isn't caused by optimization. Overfitting happens when your model is over-complicated and can fit all the datapoints without learning the actual rule that created them (i.e. just memorizing them, in the extreme case). There are many ways to prevent overfitting, such as choosing simpler models, dropout, DropConnect, weight decay, and just using more data. The goal should be to optimize your network and make it as accurate as possible, taking those constraints into account. To answer the question, backprop is supposedly much faster than stochastic optimization (genetic algorithms and the like). My guess is that this is because it takes advantage of what the actual output was supposed to be and adjusts the weights in the right direction based on that, whereas stochastic optimization tries completely random changes and ignores that information. However, by exploring a larger area, GAs will probably do better in the long run by avoiding local optima; they will just take longer to train. I am curious how much slower GAs are than backprop, and whether anyone knows of hybrid algorithms (scatter search seems like it would be ideal for this).
Backpropagation vs Genetic Algorithm for Neural Network training
Second answer is wrong. Overfitting isn't caused by optimization. Overfitting happens when your model is over-complicated and can fit all the datapoints without learning the actual rule that created t
Backpropagation vs Genetic Algorithm for Neural Network training Second answer is wrong. Overfitting isn't caused by optimization. Overfitting happens when your model is over-complicated and can fit all the datapoints without learning the actual rule that created them (i.e. just memorizing them, in the extreme case). There are many ways to prevent overfitting, such as choosing simpler models, dropout, DropConnect, weight decay, and just using more data. The goal should be to optimize your network and make it as accurate as possible, taking those constraints into account. To answer the question, backprop is supposedly much faster than stochastic optimization (genetic algorithms and the like). My guess is that this is because it takes advantage of what the actual output was supposed to be and adjusts the weights in the right direction based on that, whereas stochastic optimization tries completely random changes and ignores that information. However, by exploring a larger area, GAs will probably do better in the long run by avoiding local optima; they will just take longer to train. I am curious how much slower GAs are than backprop, and whether anyone knows of hybrid algorithms (scatter search seems like it would be ideal for this).
Backpropagation vs Genetic Algorithm for Neural Network training Second answer is wrong. Overfitting isn't caused by optimization. Overfitting happens when your model is over-complicated and can fit all the datapoints without learning the actual rule that created t
7,238
Backpropagation vs Genetic Algorithm for Neural Network training
imho the difference between GA and backpropagation is that GA is based on random numbers, while backpropagation is based on a fixed algorithm such as stochastic gradient descent. GA being based on random numbers, combined with mutation, means that it would likely avoid being caught in a local minimum. But GA being based on random numbers also means that it is fairly likely that two different runs of the learning on the same network will reach different conclusions, i.e. different sets of weights.
Backpropagation vs Genetic Algorithm for Neural Network training
imho the difference between GA and backpropagation is that GA is based on random numbers and that backpropagation is based on a static algorithm such as stochastic gradient descent. GA being based on
Backpropagation vs Genetic Algorithm for Neural Network training imho the difference between GA and backpropagation is that GA is based on random numbers, while backpropagation is based on a fixed algorithm such as stochastic gradient descent. GA being based on random numbers, combined with mutation, means that it would likely avoid being caught in a local minimum. But GA being based on random numbers also means that it is fairly likely that two different runs of the learning on the same network will reach different conclusions, i.e. different sets of weights.
Backpropagation vs Genetic Algorithm for Neural Network training imho the difference between GA and backpropagation is that GA is based on random numbers and that backpropagation is based on a static algorithm such as stochastic gradient descent. GA being based on
7,239
Backpropagation vs Genetic Algorithm for Neural Network training
To answer your question, I have tried to write a selection-based (genetic) algorithm to train on MNIST without TensorFlow or PyTorch, using only NumPy. It works, but terribly slowly, so I tried utilizing the GPU with CuPy. I optimized my code, wrote a custom kernel for CuPy, etc. So I invented binary MNIST :P - a very small dataset with only ones and zeros - and tried the GA on it. It worked. The conclusion I have reached is that the backpropagation math and its code application are far easier to understand and to track, to see what is going on. Selection-based algorithms are easier to code but harder to track (what is going on, how they evolve).
Backpropagation vs Genetic Algorithm for Neural Network training
To answer your question, I have tried to write selective algo to train mnist without tensorflow or pytorch. Only using numpy. It works but terribly slow. So tried utilizing gpu using cupy. I optimiz
Backpropagation vs Genetic Algorithm for Neural Network training To answer your question, I have tried to write a selection-based (genetic) algorithm to train on MNIST without TensorFlow or PyTorch, using only NumPy. It works, but terribly slowly, so I tried utilizing the GPU with CuPy. I optimized my code, wrote a custom kernel for CuPy, etc. So I invented binary MNIST :P - a very small dataset with only ones and zeros - and tried the GA on it. It worked. The conclusion I have reached is that the backpropagation math and its code application are far easier to understand and to track, to see what is going on. Selection-based algorithms are easier to code but harder to track (what is going on, how they evolve).
Backpropagation vs Genetic Algorithm for Neural Network training To answer your question, I have tried to write selective algo to train mnist without tensorflow or pytorch. Only using numpy. It works but terribly slow. So tried utilizing gpu using cupy. I optimiz
7,240
Are decision trees almost always binary trees?
This is mainly a technical issue: if you don't restrict to binary choices, there are simply too many possibilities for the next split in the tree. So you are definitely right in all the points made in your question. Be aware that most tree-type algorithms work stepwise and even so are not guaranteed to give the best possible result. This is just one extra caveat. For most practical purposes (though not during the building/pruning of the tree) the two kinds of splits are equivalent, given that they appear immediately after each other.
Are decision trees almost always binary trees?
This is mainly a technical issue: if you don't restrict to binary choices, there are simply too many possibilities for the next split in the tree. So you are definitely right in all the points made in
Are decision trees almost always binary trees? This is mainly a technical issue: if you don't restrict to binary choices, there are simply too many possibilities for the next split in the tree. So you are definitely right in all the points made in your question. Be aware that most tree-type algorithms work stepwise and even so are not guaranteed to give the best possible result. This is just one extra caveat. For most practical purposes (though not during the building/pruning of the tree) the two kinds of splits are equivalent, given that they appear immediately after each other.
Are decision trees almost always binary trees? This is mainly a technical issue: if you don't restrict to binary choices, there are simply too many possibilities for the next split in the tree. So you are definitely right in all the points made in
7,241
Are decision trees almost always binary trees?
"A two-way split followed by another two-way split on one of the children is not the same thing as a single three-way split" - I'm not sure what you mean here. Any multi-way split can be represented as a series of two-way splits. For a three-way split, you can split into A, B, and C by first splitting into A&B versus C and then splitting out A from B. A given algorithm might not choose that particular sequence (especially if, like most algorithms, it's greedy), but it certainly could. And if any randomization or stagewise procedures are used, as in random forests or boosted trees, the chances of finding the right sequence of splits go up. As others have pointed out, multi-way splits are computationally costly, so given these alternatives, most researchers seem to have chosen binary splits. Hope this helps
Are decision trees almost always binary trees?
A two-way split followed by another two-way split on one of the children is not the same thing as a single three-way split I'm not sure what you mean here. Any multi-way split can be represented as a
Are decision trees almost always binary trees? "A two-way split followed by another two-way split on one of the children is not the same thing as a single three-way split" - I'm not sure what you mean here. Any multi-way split can be represented as a series of two-way splits. For a three-way split, you can split into A, B, and C by first splitting into A&B versus C and then splitting out A from B. A given algorithm might not choose that particular sequence (especially if, like most algorithms, it's greedy), but it certainly could. And if any randomization or stagewise procedures are used, as in random forests or boosted trees, the chances of finding the right sequence of splits go up. As others have pointed out, multi-way splits are computationally costly, so given these alternatives, most researchers seem to have chosen binary splits. Hope this helps
Are decision trees almost always binary trees? A two-way split followed by another two-way split on one of the children is not the same thing as a single three-way split I'm not sure what you mean here. Any multi-way split can be represented as a
7,242
Are decision trees almost always binary trees?
Regarding uses of decision trees and splitting (binary versus otherwise), I only know of CHAID as having non-binary splits, but there are likely others. For me, the main use of a non-binary split is in data mining exercises where I am looking at how to optimally bin a nominal variable with many levels. A series of binary splits is not as useful as a grouping done by CHAID.
Are decision trees almost always binary trees?
Regarding uses of decision tree and splitting (binary versus otherwise), I only know of CHAID that has non-binary splits but there are likely others. For me, the main use of a non binary split is in d
Are decision trees almost always binary trees? Regarding uses of decision trees and splitting (binary versus otherwise), I only know of CHAID as having non-binary splits, but there are likely others. For me, the main use of a non-binary split is in data mining exercises where I am looking at how to optimally bin a nominal variable with many levels. A series of binary splits is not as useful as a grouping done by CHAID.
Are decision trees almost always binary trees? Regarding uses of decision tree and splitting (binary versus otherwise), I only know of CHAID that has non-binary splits but there are likely others. For me, the main use of a non binary split is in d
7,243
Are decision trees almost always binary trees?
Please read this. For practical reasons (combinatorial explosion) most libraries implement decision trees with binary splits. Note that even then, constructing optimal binary decision trees is NP-complete (Hyafil, Laurent, and Ronald L. Rivest. "Constructing optimal binary decision trees is NP-complete." Information Processing Letters 5.1 (1976): 15-17.)
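To make the combinatorial explosion concrete: a nominal variable with k levels admits 2^(k-1) - 1 distinct binary splits, while the number of arbitrary groupings into non-empty subsets (CHAID-style multi-way splits) is the Bell number B(k), which grows far faster. A small sketch (plain Python; the helper names are my own):

```python
def n_binary_splits(k):
    # Ways to partition k levels into two non-empty groups.
    return 2 ** (k - 1) - 1

def bell(k):
    # Bell number via the Bell triangle: number of ways to partition
    # k levels into any number of non-empty groups.
    row = [1]
    for _ in range(k - 1):
        new_row = [row[-1]]
        for x in row:
            new_row.append(new_row[-1] + x)
        row = new_row
    return row[-1]

for k in (3, 5, 10):
    print(k, n_binary_splits(k), bell(k))  # e.g. k=10: 511 vs 115975
```

Already at ten levels the search space for unrestricted groupings is hundreds of times larger than for binary splits, which is the practical motivation for the binary restriction.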
Are decision trees almost always binary trees?
Please read this For practical reasons (combinatorial explosion) most libraries implement decision trees with binary splits. The nice thing is that they are NP-complete (Hyafil, Laurent, and Ronald L
Are decision trees almost always binary trees? Please read this. For practical reasons (combinatorial explosion) most libraries implement decision trees with binary splits. Note that even then, constructing optimal binary decision trees is NP-complete (Hyafil, Laurent, and Ronald L. Rivest. "Constructing optimal binary decision trees is NP-complete." Information Processing Letters 5.1 (1976): 15-17.)
Are decision trees almost always binary trees? Please read this For practical reasons (combinatorial explosion) most libraries implement decision trees with binary splits. The nice thing is that they are NP-complete (Hyafil, Laurent, and Ronald L
7,244
Are decision trees almost always binary trees?
The Quinlan family of tree models (including the C4.5 you mention) makes higher-arity splits for nominal variables, one branch for each level.
Are decision trees almost always binary trees?
The Quinlan family of tree models (including the C4.5 you mention) makes higher-arity splits for nominal variables, one branch for each level.
Are decision trees almost always binary trees? The Quinlan family of tree models (including the C4.5 you mention) makes higher-arity splits for nominal variables, one branch for each level.
Are decision trees almost always binary trees? The Quinlan family of tree models (including the C4.5 you mention) makes higher-arity splits for nominal variables, one branch for each level.
7,245
Will the fact that my Italian son is going to attend a primary school change the expected number of Italian children to be present in his class?
As always, you need to consider a probabilistic model that describes how the school distributes children among classes. Possibilities: (1) The school takes care that all classes have the same number of foreign nationals. (2) The school even tries to make certain that each nationality is represented roughly the same in every class. (3) The school doesn't consider nationality at all and just distributes randomly or based on other criteria. All of these are reasonable. Given strategy 2, the answer to your question is no. When they use strategy 3, the expectation will be close to 3, but a bit smaller. That is because your son takes up a "slot", and you have one less chance for a random Italian. When the school uses strategy 1, the expectation also goes up; how much depends on the number of foreign nationals per class. Without knowing your school there is no way to answer this more precisely. If you have just one class per year and the admission criteria are as described, the answer would be the same as for 3 above. Calculating for 3 in detail: $$E(X) = 1 + E(B(29, 2/30)) = 1 + 1.9333 = 2.9333.$$ $X$ is the number of Italian children in the class. The 1 comes from the known child, the 29 are the rest of the class, and 2/30 is the probability of an unknown kid being Italian given what the school says. $B$ is the binomial distribution. Note that starting with $E(X|X\geq1)$ does not give the proper answer, as knowing that a specific child is Italian violates the exchangeability assumed by the binomial distribution. Compare this with the boy or girl paradox, where it makes a difference whether you know that one child is a girl vs. knowing that the older child is a girl.
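The strategy-3 calculation can be sanity-checked both in closed form and by simulation; a sketch (plain Python; the seed and trial count are arbitrary choices of mine):

```python
import random

# Closed form for strategy 3: the known Italian child plus 29 random
# classmates, each Italian with probability 2/30.
expected = 1 + 29 * (2 / 30)
print(expected)  # 2.9333...

# Monte Carlo check of the same quantity.
random.seed(0)
n_trials = 100_000
total = 0
for _ in range(n_trials):
    # 1 known Italian + Binomial(29, 2/30) random Italians.
    total += 1 + sum(random.random() < 2 / 30 for _ in range(29))
simulated = total / n_trials
print(simulated)  # close to 2.933
```

Both agree on roughly 2.93, slightly below 3 because the known child occupies one of the 30 seats.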
Will the fact that my Italian son is going to attend a primary school change the expected number of
As always you need to consider a probabilistic model that describes how the school distributes children among classes. Possibilities: The school takes care that all classes have the same number of fo
Will the fact that my Italian son is going to attend a primary school change the expected number of Italian children to be present in his class? As always, you need to consider a probabilistic model that describes how the school distributes children among classes. Possibilities: (1) The school takes care that all classes have the same number of foreign nationals. (2) The school even tries to make certain that each nationality is represented roughly the same in every class. (3) The school doesn't consider nationality at all and just distributes randomly or based on other criteria. All of these are reasonable. Given strategy 2, the answer to your question is no. When they use strategy 3, the expectation will be close to 3, but a bit smaller. That is because your son takes up a "slot", and you have one less chance for a random Italian. When the school uses strategy 1, the expectation also goes up; how much depends on the number of foreign nationals per class. Without knowing your school there is no way to answer this more precisely. If you have just one class per year and the admission criteria are as described, the answer would be the same as for 3 above. Calculating for 3 in detail: $$E(X) = 1 + E(B(29, 2/30)) = 1 + 1.9333 = 2.9333.$$ $X$ is the number of Italian children in the class. The 1 comes from the known child, the 29 are the rest of the class, and 2/30 is the probability of an unknown kid being Italian given what the school says. $B$ is the binomial distribution. Note that starting with $E(X|X\geq1)$ does not give the proper answer, as knowing that a specific child is Italian violates the exchangeability assumed by the binomial distribution. Compare this with the boy or girl paradox, where it makes a difference whether you know that one child is a girl vs. knowing that the older child is a girl.
Will the fact that my Italian son is going to attend a primary school change the expected number of As always you need to consider a probabilistic model that describes how the school distributes children among classes. Possibilities: The school takes care that all classes have the same number of fo
7,246
Will the fact that my Italian son is going to attend a primary school change the expected number of Italian children to be present in his class?
Another way to look at this is at the level of individual children. Assuming that the 30 children are drawn randomly from a population (which you've indicated we can), we can work backward to the rough probability of an Italian child being drawn from this population: $2/30$ = $1/15$. Given that we know that one of the 30 is Italian, we only have to compute the expected number among the remaining 29 children: $$29 \cdot 1/15 = 29/15 = 1.933\ldots$$ So, knowing that your child is Italian changes the expected number of Italian children in the class to approximately 2.933, which is much closer to 3 than to 2.
Will the fact that my Italian son is going to attend a primary school change the expected number of
Another way to look a this is at the level of individual children. Assuming that 30 children drawn randomly from a population (which you've indicated we can), we can work backward to the rough probabi
Will the fact that my Italian son is going to attend a primary school change the expected number of Italian children to be present in his class? Another way to look at this is at the level of individual children. Assuming that the 30 children are drawn randomly from a population (which you've indicated we can), we can work backward to the rough probability of an Italian child being drawn from this population: $2/30$ = $1/15$. Given that we know that one of the 30 is Italian, we only have to compute the expected number among the remaining 29 children: $$29 \cdot 1/15 = 29/15 = 1.933\ldots$$ So, knowing that your child is Italian changes the expected number of Italian children in the class to approximately 2.933, which is much closer to 3 than to 2.
Will the fact that my Italian son is going to attend a primary school change the expected number of Another way to look a this is at the level of individual children. Assuming that 30 children drawn randomly from a population (which you've indicated we can), we can work backward to the rough probabi
7,247
Will the fact that my Italian son is going to attend a primary school change the expected number of Italian children to be present in his class?
Here are my thoughts on how to approach this: Let the random variable $S_n$ denote the number of Italian children in a class that is currently of size $n$, and let $X$ be the indicator of a new child's being Italian. Suppose that we add child $X$ to this class. Then the expected number of Italian children in this augmented class of size $n+1$ is $\mathbb E(S_n + X) = \mathbb E(S_n) + \mathbb E(X) = \mathbb E(S_n) + \mathbb P(X = 1)$. Note that independence doesn't matter here, since we're only using the linearity of expectation. If child $X$ is known to be Italian then $X = 1$ with probability 1, so we have increased the expected value by 1.
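The linearity argument can be spelled out numerically. A Python sketch (not from the original answer; the 1/15 probability comes from the question's 2-in-30 figure):

```python
from fractions import Fraction

p = Fraction(1, 15)      # probability a randomly drawn child is Italian
n = 29                   # classmates drawn at random

E_Sn = n * p             # E(S_n), by linearity of expectation

# Adding a child of unknown nationality raises the expectation by p;
# adding a child *known* to be Italian raises it by exactly 1.
E_random_child = E_Sn + p
E_known_italian = E_Sn + 1

print(E_Sn, E_random_child, E_known_italian)   # 29/15 2 44/15
```

No independence assumption is used anywhere: expectation is linear regardless of how the children's nationalities are correlated.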
7,248
Will the fact that my Italian son is going to attend a primary school change the expected number of Italian children to be present in his class?
Based on the Admission Office info, the number of Italian children follows a binomial $\mathrm{Binom}(30, 2/30)$ distribution, assuming independence. Now you know that in your class there is at least one Italian child, so the expectation becomes $\mathbb{E}(X \mid X\geq1)$. For $X\sim \mathrm{Binom}(30, 2/30)$, this evaluates to approximately $2.29$. Edit. Evaluation of the expectation: $$E[X \mid X\geq1]=\sum_{i=0}^{30}iP(X=i \mid X\geq1)=\sum_{i=0}^{30}i\cdot \frac{P(X=i, X\geq1)}{P(X\geq1)}=\sum_{i=1}^{30}i\cdot \frac{P(X=i)}{1-P(X=0)}$$ (note the change in the summation lower bound at the last step)
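The conditional expectation can be checked exactly, both via the closed form $E[X \mid X\geq 1] = E[X]/P(X\geq 1)$ and via the term-by-term sum. A Python sketch (an added check, not part of the original post):

```python
from math import comb

n = 30
p = 1 / 15
q = 1 - p

# Closed form: for binomial X, E[X | X >= 1] = E[X] / P(X >= 1).
closed_form = n * p / (1 - q ** n)

# Term-by-term sum, mirroring the evaluation in the answer's Edit.
pmf = [comb(n, i) * p ** i * q ** (n - i) for i in range(n + 1)]
by_sum = sum(i * pmf[i] for i in range(1, n + 1)) / (1 - pmf[0])

print(closed_form, by_sum)   # both about 2.289
```

The value is roughly 2.289, consistent with the answer's figure up to rounding.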
7,249
Will the fact that my Italian son is going to attend a primary school change the expected number of Italian children to be present in his class?
No. Your knowledge of the impending events changes nothing about the school's typical experience.
7,250
Am I creating bias by using the same random seed over and over?
There is no bias if the RNG is any good. By always using the same seed you are, however, creating a strong interdependence among all the simulations you perform in your career. This creates an unusual kind of risk. By using the same seed each time, either you are always getting a pretty nice pseudorandom sequence and all your work goes well or--with very low but non-zero probability--you are always using a pretty bad sequence and your simulations are not as representative of the underlying distributions as you think they might be. Either all your work is pretty good or all of it is pretty lousy! Contrast this with using truly random starting seeds each time. Once in a very long while you might obtain a sequence of random values that is not representative of the distribution you are modeling, but most of the time you would be just fine. If you never attempted to reproduce your own work (with a new seed), then once or twice in your career you might get misleading results, but the vast majority of the time you will be ok. There is a simple and obvious cure: Always, always check your work by restarting with another seed. It's virtually impossible that two seeds accidentally will give misleading results in the same way. On the other hand, there is extraordinary merit in having a well-known "personal seed": it shows the world you are being honest. A sly, subtle way to lie with simulations is to repeat them until they give you a predetermined outcome. Here's a working R example to "demonstrate" that even a fair coin is highly likely to land heads more than half the time: n.flips <- 100 seeds <- 1:10^3 # # Run some preliminary simulations. # results <- sapply(seeds, function(seed) { set.seed(seed) mean(runif(n.flips) > 1/2) }) # # Now do the "real" simulation. 
# seed <- seeds[which.max(results)] set.seed(seed) x <- mean(runif(n.flips) > 1/2) z <- (x - 1/2) * 2 * sqrt(n.flips) cat("Mean:", x, "Z:", z, "p-value:", pnorm(z, lower.tail=FALSE), "\n") By looking at a wider range of seeds (from $1$ through $10^6$), I was able to find a congenial one: 218134. When you start with this as the seed, the resulting $100$ simulated coin flips exhibit $75$ heads! That is significantly different from the expected value of $50$ ($p=0.000004$). The implications can be fascinating and important. For instance, if I knew in advance whom I would be recruiting into a randomized double-blind controlled trial, and in what order (which I might be able to control as a university professor testing a group of captive undergraduates or lab rats), then beforehand I could run such a set of simulations to find a seed that groups the students more to my liking to favor whatever I was hoping to "prove." I could include the planned order and that seed in my experimental plan before conducting the experiment, thereby creating a procedure that no critical reviewer could ever impeach--but nevertheless stacking the deck in my favor. (I believe there are entire branches of pseudoscience that use some variant of this trick to gain credibility. Would you believe I actually used ESP to control the computer? I can do it at a distance with yours, too!) Somebody whose default seed is known cannot play this game. My personal seed is 17, as a large proportion of my posts attest (currently 155 out of 161 posts that set a seed use this one). In R it is a difficult seed to work with, because (as it turns out) most small datasets I create with it have a strong outlier. That's not a bad characteristic ... .
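The seed-mining trick in the answer's R example carries over to any language. Here is a Python sketch of the same idea (the seed range and flip count are arbitrary illustrative choices, not taken from the original):

```python
import random

def heads_fraction(seed, n_flips=100):
    """Fraction of heads in n_flips fair-coin flips from a seeded RNG."""
    rng = random.Random(seed)
    return sum(rng.random() > 0.5 for _ in range(n_flips)) / n_flips

# "Preliminary simulations": scan seeds for the most heads-favourable one.
results = {seed: heads_fraction(seed) for seed in range(1000)}
best_seed = max(results, key=results.get)

# The "real" simulation now reproduces the cherry-picked outcome exactly.
best = heads_fraction(best_seed)
print(best_seed, best)   # the heads fraction is noticeably above 0.5
```

Scanning even a modest number of seeds almost surely turns up one whose run looks "significantly" biased, which is exactly the abuse the answer warns about.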
7,251
Am I creating bias by using the same random seed over and over?
As stated above, a good RNG will not generate bias from using the same seed. However, there will be a correlation among the results. (The same pseudo-random number will start each computation.) Whether this matters isn't a matter of mathematics. Using the same seed is OK at times: for debugging or when you know you want correlated results.
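The correlation point is literal: with the same seed, two "independent" simulations return identical results, not merely similar ones. A Python sketch (the seed values here are arbitrary):

```python
import random

def simulate(seed, n=5):
    """Draw n pseudo-random numbers from a freshly seeded generator."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

run_a = simulate(17)
run_b = simulate(17)   # reusing the seed: the exact same stream
run_c = simulate(18)   # a different seed: a fresh stream

print(run_a == run_b, run_a == run_c)
```

This is why reusing a seed is harmless (and even useful) for debugging, but it does tie all such runs together.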
7,252
What is a good use of the 'comment' function in R?
To second @Gavin, Frank Harrell has developed efficient ways to handle annotated data.frames in R in his Hmisc package. For example, the label() and units() functions allow one to add dedicated attributes to R objects. I find them very handy when producing a summary of a data.frame (e.g., with describe()). Another useful way of using such an extra attribute is to apply a timestamp to a data set. I also add an attribute for things like the random seed and fold number (when I use k-fold or LOO cross-validation).
7,253
What is a good use of the 'comment' function in R?
One thing I often find myself doing in my R scripts for a particular data analysis task is to include comments in the script about the units of variables in my data frames. I work with environmental data, and chemists and ecologists seem to enjoy using a wide range of different units for the same things (mg L$^{-1}$ vs $\mu$eq L$^{-1}$, etc.). My colleagues usually store this information in the row immediately below the column names in Excel sheets. I'd see comment() as a nice way of attaching this information to a data frame for future reference.
7,254
What is a good use of the 'comment' function in R?
Similar facilities exist in other packages, such as the -notes- command in Stata. We use this to document full details of a variable, e.g. details of assay for a biochemical measurement, or exact wording of the question asked for questionnaire data. This is often too much info for the variable name or label, one or both of which are displayed in the output of every analysis involving the variable and are therefore best kept reasonably short.
7,255
What is a good use of the 'comment' function in R?
One of the things I find myself doing a lot is tracking the commands used to generate data and objects, and have found the comment to be a useful tool for this. The 'matched.call.data' and 'generate.command.string' do the trick. Not perfect, but helpful and a use for 'comment()'. :) # Comments only accept strings... # Substituting the escaped quotes ('\"') makes it prettier. generate.command.string <- function( matched.call.data ) { command.string <- as.character( bquote( .( list( matched.call.data ) ) ) ) sapply( bquote( .(command.string) ), USE.NAMES=FALSE, function( x ) gsub( "\\\"", "\'", as.list( match.call() )$x )[[2]] ) } # Some generating function... generate.matrix <- function( nrows, ncols, data=NA ) { # Some generated object mat <- matrix( data= data, nrow= nrows, ncol= ncols ) matched.call.data <- do.call( "call", c( list( as.character( match.call()[[1]] ) ), lapply( as.list( match.call() )[-1], eval ) ) ) comment( mat ) <- c( Generated= date(), Command = generate.command.string( matched.call.data ) ) mat } # Generate an object with a missing argument. emptyMat <- generate.matrix( nrows=2, ncols=2 ) comment( emptyMat ) # Generate without formally stating arguments. dataMat <- generate.matrix( 2, 2, sample(1:4, 4, replace= TRUE ) ) comment( dataMat ) # And with a longer command. charMat <- generate.matrix( 3, 3, c( 'This', 'is', 'a', 'much', 'longer', 'argument', 'section', 'that', 'wraps') ) comment( charMat ) # And with a variable. myData <- c( 'An', 'expanded', 'command', 'argument') charMat2 <- generate.matrix( 2, 2, myData ) comment( charMat2 ) # Create a new object from an original command. Sys.sleep(1) emptyMat2 <- eval( parse( text= comment( emptyMat )[['Command']] ) ) dataMat2 <- eval( parse( text= comment( emptyMat )[['Command']] ) ) # Check equality of the static matrices. identical( emptyMat, emptyMat2 ) # The generation dates are different. 
all.equal( emptyMat, emptyMat2, check.attributes= FALSE ) comment( emptyMat )['Generated'] <- NA comment( emptyMat2 )['Generated'] <- NA identical( emptyMat, emptyMat2 ) # Command argument structure still works too. str( as.list( match.call( generate.matrix, parse( text=comment( charMat2 )[[ 'Command' ]] ) ) )[-1] )
7,256
What is a good use of the 'comment' function in R?
Allow me to suggest my general solution to object management in R: the repo package. Using it, you can assign each variable a long name, a description, a set of tags, a remote url, dependency relations and also attach figures or generic external files. For example, source code can be stored as a repository item and attached to resources produced by it. Find the latest stable release on CRAN (install.packages("repo")) or the latest development on github. A quick overview here. Hope it helps.
7,257
Bootstrap prediction interval
The method laid out below is the one described in Section 6.3.3 of Davison and Hinkley (1997), Bootstrap Methods and Their Application. Thanks to Glen_b and his comment here. Given that there were several questions on Cross Validated on this topic, I thought it was worth writing up. The linear regression model is: \begin{align} Y_i &= X_i\beta+\epsilon_i \end{align} We have data, $i=1,2,\ldots,N$, which we use to estimate the $\beta$ as: \begin{align} \hat{\beta}_{\text{OLS}} &= \left( X'X \right)^{-1}X'Y \end{align} Now, we want to predict what $Y$ will be for a new data point, given that we know $X$ for it. This is the prediction problem. Let's call the new $X$ (which we know) $X_{N+1}$ and the new $Y$ (which we would like to predict), $Y_{N+1}$. The usual prediction (if we are assuming that the $\epsilon_i$ are iid and uncorrelated with $X$) is: \begin{align} Y^p_{N+1} &= X_{N+1}\hat{\beta}_{\text{OLS}} \end{align} The forecast error made by this prediction is: \begin{align} e^p_{N+1} &= Y_{N+1}-Y^p_{N+1} \end{align} We can re-write this equation like: \begin{align} Y_{N+1} &= Y^p_{N+1} + e^p_{N+1} \end{align} Now, $Y^p_{N+1}$ we have already calculated. So, if we want to bound $Y_{N+1}$ in an interval, say, 90% of the time, all we need to do is estimate consistently the $5^{th}$ and $95^{th}$ percentiles/quantiles of $e^p_{N+1}$, call them $e^5,e^{95}$, and the prediction interval will be $\left[Y^p_{N+1}+e^5,Y^p_{N+1}+e^{95} \right]$. How to estimate the quantiles/percentiles of $e^p_{N+1}$? Well, we can write: \begin{align} e^p_{N+1} &= Y_{N+1}-Y^p_{N+1}\\ &= X_{N+1}\beta + \epsilon_{N+1} - X_{N+1}\hat{\beta}_{\text{OLS}}\\ &= X_{N+1}\left( \beta-\hat{\beta}_{\text{OLS}} \right) + \epsilon_{N+1} \end{align} The strategy will be to sample (in a bootstrap kind of way) many times from $e^p_{N+1}$ and then calculate percentiles in the usual way. 
So, maybe we will sample 10,000 times from $e^p_{N+1}$, and then estimate the $5^{th}$ and $95^{th}$ percentiles as the $500^{th}$ and $9,500^{th}$ smallest members of the sample. To draw on $X_{N+1}\left( \beta-\hat{\beta}_{\text{OLS}} \right)$, we can bootstrap errors (cases would be fine, too, but we are assuming iid errors anyway). So, on each bootstrap replication, you draw $N$ times with replacement from the variance-adjusted residuals (see next para) to get $\epsilon^*_i$, then make new $Y^*_i=X_i\hat{\beta}_{\text{OLS}}+\epsilon^*_i$, then run OLS on the new dataset, $\left(Y^*,X \right)$ to get this replication's $\beta^*_r$. At last, this replication's draw on $X_{N+1}\left( \beta-\hat{\beta}_{\text{OLS}} \right)$ is $X_{N+1}\left( \hat{\beta}_{\text{OLS}}-\beta^*_r \right)$ Given we are assuming iid $\epsilon$, the natural way to sample from the $\epsilon_{N+1}$ part of the equation is to use the residuals we have from the regression, $\left\{ e^*_1,e^*_2,\ldots,e^*_N \right\}$. Residuals have different and generally too small variances, so we will want to sample from $\left\{ s_1-\overline{s},s_2-\overline{s},\ldots,s_N-\overline{s} \right\}$, the variance-corrected residuals, where $s_i=e^*_i/\sqrt{(1-h_i)}$ and $h_i$ is the leverage of observation $i$. And, finally, the algorithm for making a 90% prediction interval for $Y_{N+1}$, given that $X$ is $X_{N+1}$ is: Make the prediction $Y^p_{N+1}=X_{N+1}\hat{\beta}_{\text{OLS}}$. Make the variance-adjusted residuals, $\left\{ s_1-\overline{s},s_2-\overline{s},\ldots,s_N-\overline{s}\right\}$, where $s_i=e_i/\sqrt{1-h_{i}}$. 
For replications $r=1,2,\ldots,R$: Draw $N$ times on the adjusted residuals to make bootstrap residuals $\left\{\epsilon^*_1,\epsilon^*_2,\ldots,\epsilon^*_N \right\}$ Generate bootstrap $Y^*=X\hat{\beta}_{\text{OLS}}+\epsilon^*$ Calculate bootstrap OLS estimator for this replication, $\beta^*_r=\left( X'X \right)^{-1}X'Y^*$ Obtain bootstrap residuals from this replication, $e^*_r=Y^*-X\beta^*_r$ Calculate bootstrap variance-adjusted residuals from this replication, $s^*-\overline{s^*}$ Draw one of the bootstrap variance-adjusted residuals from this replication, $\epsilon^*_{N+1,r}$ Calculate this replication's draw on $e^p_{N+1}$, $e^{p*}_r=X_{N+1}\left( \hat{\beta}_{\text{OLS}}-\beta^*_r \right)+\epsilon^*_{N+1,r}$ Find $5^{th}$ and $95^{th}$ percentiles of $e^p_{N+1}$, $e^5,e^{95}$ 90% prediction interval for $Y_{N+1}$ is $\left[Y^p_{N+1}+e^5,Y^p_{N+1}+e^{95} \right]$. Here is R code: # This script gives an example of the procedure to construct a prediction interval # for a linear regression model using a bootstrap method. The method is the one # described in Section 6.3.3 of Davison and Hinkley (1997), # _Bootstrap Methods and Their Application_. 
#rm(list=ls()) set.seed(12344321) library(MASS) library(Hmisc) # Generate bivariate regression data x <- runif(n=100,min=0,max=100) y <- 1 + x + (rexp(n=100,rate=0.25)-4) my.reg <- lm(y~x) summary(my.reg) # Predict y for x=78: y.p <- coef(my.reg)["(Intercept)"] + coef(my.reg)["x"]*78 y.p # Create adjusted residuals leverage <- influence(my.reg)$hat my.s.resid <- residuals(my.reg)/sqrt(1-leverage) my.s.resid <- my.s.resid - mean(my.s.resid) reg <- my.reg s <- my.s.resid the.replication <- function(reg,s,x_Np1=0){ # Make bootstrap residuals ep.star <- sample(s,size=length(reg$residuals),replace=TRUE) # Make bootstrap Y y.star <- fitted(reg)+ep.star # Do bootstrap regression x <- model.frame(reg)[,2] bs.reg <- lm(y.star~x) # Create bootstrapped adjusted residuals bs.lev <- influence(bs.reg)$hat bs.s <- residuals(bs.reg)/sqrt(1-bs.lev) bs.s <- bs.s - mean(bs.s) # Calculate draw on prediction error xb.xb <- coef(my.reg)["(Intercept)"] - coef(bs.reg)["(Intercept)"] xb.xb <- xb.xb + (coef(my.reg)["x"] - coef(bs.reg)["x"])*x_Np1 return(unname(xb.xb + sample(bs.s,size=1))) } # Do bootstrap with 10,000 replications ep.draws <- replicate(n=10000,the.replication(reg=my.reg,s=my.s.resid,x_Np1=78)) # Create prediction interval y.p+quantile(ep.draws,probs=c(0.05,0.95)) # prediction interval using normal assumption predict(my.reg,newdata=data.frame(x=78),interval="prediction",level=0.90) # Quick and dirty Monte Carlo to see which prediction interval is better # That is, what are the 5th and 95th percentiles of Y_{N+1} # # To do it properly, I guess we would want to do the whole procedure above # 10,000 times and then see what percentage of the time each prediction # interval covered Y_{N+1} y.np1 <- 1 + 78 + (rexp(n=10000,rate=0.25)-4) quantile(y.np1,probs=c(0.05,0.95))
Bootstrap prediction interval
The method laid out below is the one described in Section 6.3.3 of Davidson and Hinckley (1997), Bootstrap Methods and Their Application. Thanks to Glen_b and his comment here. Given that there were
Bootstrap prediction interval The method laid out below is the one described in Section 6.3.3 of Davidson and Hinckley (1997), Bootstrap Methods and Their Application. Thanks to Glen_b and his comment here. Given that there were several questions on Cross Validated on this topic, I thought it was worth writing up. The linear regression model is: \begin{align} Y_i &= X_i\beta+\epsilon_i \end{align} We have data, $i=1,2,\ldots,N$, which we use to estimate the $\beta$ as: \begin{align} \hat{\beta}_{\text{OLS}} &= \left( X'X \right)^{-1}X'Y \end{align} Now, we want to predict what $Y$ will be for a new data point, given that we know $X$ for it. This is the prediction problem. Let's call the new $X$ (which we know) $X_{N+1}$ and the new $Y$ (which we would like to predict), $Y_{N+1}$. The usual prediction (if we are assuming that the $\epsilon_i$ are iid and uncorrelated with $X$) is: \begin{align} Y^p_{N+1} &= X_{N+1}\hat{\beta}_{\text{OLS}} \end{align} The forecast error made by this prediction is: \begin{align} e^p_{N+1} &= Y_{N+1}-Y^p_{N+1} \end{align} We can re-write this equation like: \begin{align} Y_{N+1} &= Y^p_{N+1} + e^p_{N+1} \end{align} Now, $Y^p_{N+1}$ we have already calculated. So, if we want to bound $Y_{N+1}$ in an interval, say, 90% of the time, all we need to do is estimate consistently the $5^{th}$ and $95^{th}$ percentiles/quantiles of $e^p_{N+1}$, call them $e^5,e^{95}$, and the prediction interval will be $\left[Y^p_{N+1}+e^5,Y^p_{N+1}+e^{95} \right]$. How to estimate the quantiles/percentiles of $e^p_{N+1}$? Well, we can write: \begin{align} e^p_{N+1} &= Y_{N+1}-Y^p_{N+1}\\ &= X_{N+1}\beta + \epsilon_{N+1} - X_{N+1}\hat{\beta}_{\text{OLS}}\\ &= X_{N+1}\left( \beta-\hat{\beta}_{\text{OLS}} \right) + \epsilon_{N+1} \end{align} The strategy will be to sample (in a bootstrap kind of way) many times from $e^p_{N+1}$ and then calculate percentiles in the usual way. 
So, maybe we will sample 10,000 times from $e^p_{N+1}$, and then estimate the $5^{th}$ and $95^{th}$ percentiles as the $500^{th}$ and $9{,}500^{th}$ smallest members of the sample.

To draw on $X_{N+1}\left( \beta-\hat{\beta}_{\text{OLS}} \right)$, we can bootstrap errors (cases would be fine, too, but we are assuming iid errors anyway). So, on each bootstrap replication, you draw $N$ times with replacement from the variance-adjusted residuals (see next paragraph) to get $\epsilon^*_i$, then make new $Y^*_i=X_i\hat{\beta}_{\text{OLS}}+\epsilon^*_i$, then run OLS on the new dataset, $\left(Y^*,X \right)$, to get this replication's $\beta^*_r$. Finally, this replication's draw on $X_{N+1}\left( \beta-\hat{\beta}_{\text{OLS}} \right)$ is $X_{N+1}\left( \hat{\beta}_{\text{OLS}}-\beta^*_r \right)$.

Given we are assuming iid $\epsilon$, the natural way to sample from the $\epsilon_{N+1}$ part of the equation is to use the residuals we have from the regression, $\left\{ e^*_1,e^*_2,\ldots,e^*_N \right\}$. Residuals have different and generally too-small variances, so we will want to sample from $\left\{ s_1-\overline{s},s_2-\overline{s},\ldots,s_N-\overline{s} \right\}$, the variance-corrected residuals, where $s_i=e^*_i/\sqrt{1-h_i}$ and $h_i$ is the leverage of observation $i$.

And, finally, the algorithm for making a 90% prediction interval for $Y_{N+1}$, given that $X$ is $X_{N+1}$, is:

Make the prediction $Y^p_{N+1}=X_{N+1}\hat{\beta}_{\text{OLS}}$.
Make the variance-adjusted residuals, $\left\{ s_1-\overline{s},s_2-\overline{s},\ldots,s_N-\overline{s}\right\}$, where $s_i=e_i/\sqrt{1-h_i}$.
For replications $r=1,2,\ldots,R$:
  Draw $N$ times on the adjusted residuals to make bootstrap residuals $\left\{\epsilon^*_1,\epsilon^*_2,\ldots,\epsilon^*_N \right\}$
  Generate bootstrap $Y^*=X\hat{\beta}_{\text{OLS}}+\epsilon^*$
  Calculate the bootstrap OLS estimator for this replication, $\beta^*_r=\left( X'X \right)^{-1}X'Y^*$
  Obtain bootstrap residuals from this replication, $e^*_r=Y^*-X\beta^*_r$
  Calculate bootstrap variance-adjusted residuals from this replication, $s^*-\overline{s^*}$
  Draw one of the bootstrap variance-adjusted residuals from this replication, $\epsilon^*_{N+1,r}$
  Calculate this replication's draw on $e^p_{N+1}$: $e^{p*}_r=X_{N+1}\left( \hat{\beta}_{\text{OLS}}-\beta^*_r \right)+\epsilon^*_{N+1,r}$
Find the $5^{th}$ and $95^{th}$ percentiles of the $e^{p*}_r$, call them $e^5,e^{95}$.
The 90% prediction interval for $Y_{N+1}$ is $\left[Y^p_{N+1}+e^5,Y^p_{N+1}+e^{95} \right]$.

Here is R code:

# This script gives an example of the procedure to construct a prediction
# interval for a linear regression model using a bootstrap method. The method
# is the one described in Section 6.3.3 of Davison and Hinkley (1997),
# _Bootstrap Methods and Their Application_.
#rm(list=ls())
set.seed(12344321)
library(MASS)
library(Hmisc)

# Generate bivariate regression data
x <- runif(n=100, min=0, max=100)
y <- 1 + x + (rexp(n=100, rate=0.25) - 4)

my.reg <- lm(y~x)
summary(my.reg)

# Predict y for x=78:
y.p <- coef(my.reg)["(Intercept)"] + coef(my.reg)["x"]*78
y.p

# Create adjusted residuals
leverage   <- influence(my.reg)$hat
my.s.resid <- residuals(my.reg)/sqrt(1-leverage)
my.s.resid <- my.s.resid - mean(my.s.resid)

reg <- my.reg
s   <- my.s.resid

the.replication <- function(reg, s, x_Np1=0){
  # Make bootstrap residuals
  ep.star <- sample(s, size=length(reg$residuals), replace=TRUE)

  # Make bootstrap Y
  y.star <- fitted(reg) + ep.star

  # Do bootstrap regression
  x <- model.frame(reg)[,2]
  bs.reg <- lm(y.star~x)

  # Create bootstrapped adjusted residuals
  bs.lev <- influence(bs.reg)$hat
  bs.s   <- residuals(bs.reg)/sqrt(1-bs.lev)
  bs.s   <- bs.s - mean(bs.s)

  # Calculate draw on prediction error
  xb.xb <- coef(my.reg)["(Intercept)"] - coef(bs.reg)["(Intercept)"]
  xb.xb <- xb.xb + (coef(my.reg)["x"] - coef(bs.reg)["x"])*x_Np1
  return(unname(xb.xb + sample(bs.s, size=1)))
}

# Do bootstrap with 10,000 replications
ep.draws <- replicate(n=10000, the.replication(reg=my.reg, s=my.s.resid, x_Np1=78))

# Create prediction interval
y.p + quantile(ep.draws, probs=c(0.05,0.95))

# Prediction interval using normal assumption
predict(my.reg, newdata=data.frame(x=78), interval="prediction", level=0.90)

# Quick and dirty Monte Carlo to see which prediction interval is better.
# That is, what are the 5th and 95th percentiles of Y_{N+1}?
#
# To do it properly, I guess we would want to do the whole procedure above
# 10,000 times and then see what percentage of the time each prediction
# interval covered Y_{N+1}.
y.np1 <- 1 + 78 + (rexp(n=10000, rate=0.25) - 4)
quantile(y.np1, probs=c(0.05,0.95))
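For readers working outside R, the same algorithm can be sketched in Python. This is my own translation, not from Davison and Hinkley, assuming only NumPy; since every bootstrap regression reuses the same design matrix $X$, the leverages need to be computed only once.

```python
# Sketch: model-based bootstrap prediction interval at x = 78 (NumPy only).
import numpy as np

rng = np.random.default_rng(12345)
n, R = 100, 2000
x = rng.uniform(0, 100, n)
y = 1 + x + (rng.exponential(scale=4, size=n) - 4)   # skewed iid errors
X = np.column_stack([np.ones(n), x])

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta = ols(X, y)
x_new = np.array([1.0, 78.0])
y_pred = x_new @ beta                                # point prediction

# variance-adjusted, centered residuals s_i = e_i / sqrt(1 - h_i)
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)        # leverages
s = (y - X @ beta) / np.sqrt(1 - h)
s -= s.mean()

draws = np.empty(R)
for r in range(R):
    eps_star = rng.choice(s, size=n, replace=True)   # bootstrap residuals
    y_star = X @ beta + eps_star
    beta_star = ols(X, y_star)
    s_b = (y_star - X @ beta_star) / np.sqrt(1 - h)  # same X => same leverages
    s_b -= s_b.mean()
    draws[r] = x_new @ (beta - beta_star) + rng.choice(s_b)

lo, hi = y_pred + np.quantile(draws, [0.05, 0.95])
print(round(lo, 2), round(y_pred, 2), round(hi, 2))
```

With skewed errors like these, the resulting interval is visibly asymmetric around the point prediction, which is exactly what the normal-theory interval cannot capture.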
7,258
Bootstrap prediction interval
Consider a much simpler solution than the excellent answer offered by Bill: following the model-based resampling of Sections 6.2.3 and 6.3.3 of Davison and Hinkley (1997), Bootstrap Methods and Their Application, treat X as fixed by design. Simply add sample(resid(fit.b), size = 1) to the prediction line in STEP 3; this adds the necessary variability to the prediction to account for the irreducible error:

# STEP 1: GENERATE DATA
set.seed(34345)
n <- 100
x <- runif(n)
y <- 1 + 0.2*x + rnorm(n)
data <- data.frame(x, y)

# STEP 2: COMPUTE CLASSIC 95%-PREDICTION INTERVAL
fit <- lm(y ~ x)

# Classic prediction interval based on standard error of forecast
predict(fit, list(x = 0.1), interval = "p")  # -0.6588168 3.093755

# Classic confidence interval based on standard error of estimation
predict(fit, list(x = 0.1), interval = "c")  # 0.893388 1.54155

# STEP 3: NOW BY BOOTSTRAP 95%-PREDICTION INTERVAL
B <- 1000
pred <- numeric(B)
for (i in 1:B) {
  boot <- sample(n, n, replace = TRUE)
  fit.b <- lm(y ~ x, data = data[boot,])
  pred[i] <- predict(fit.b, list(x = 0.1)) + sample(resid(fit.b), size = 1)
}
quantile(pred, c(0.025, 0.975))  # -0.5976346 3.0901755

This 95% bootstrap interval matches the classic 95% prediction interval, and the approach generalizes quite easily to other, more general models.
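A rough Python equivalent of the loop above, as a sketch (the seed and the replication count are my own choices; NumPy only):

```python
# Sketch: case-resampling bootstrap prediction interval at x = 0.1,
# adding one resampled residual per replication for the irreducible error.
import numpy as np

rng = np.random.default_rng(34345)
n, B = 100, 2000
x = rng.uniform(size=n)
y = 1 + 0.2 * x + rng.normal(size=n)

pred = np.empty(B)
for i in range(B):
    idx = rng.integers(0, n, n)                      # resample cases
    Xb = np.column_stack([np.ones(n), x[idx]])
    beta_b = np.linalg.lstsq(Xb, y[idx], rcond=None)[0]
    resid_b = y[idx] - Xb @ beta_b
    # prediction at x = 0.1 plus one resampled residual
    pred[i] = beta_b[0] + beta_b[1] * 0.1 + rng.choice(resid_b)

lo, hi = np.quantile(pred, [0.025, 0.975])
print(round(lo, 2), round(hi, 2))
```

Dropping the `rng.choice(resid_b)` term turns this into a confidence interval for the mean response rather than a prediction interval, which is the distinction the two `predict()` calls above illustrate.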
7,259
How to use both binary and continuous variables together in clustering?
You are right that k-means clustering should not be done with data of mixed types. Since k-means is essentially a simple search algorithm to find a partition that minimizes the within-cluster squared Euclidean distances between the clustered observations and the cluster centroid, it should only be used with data where squared Euclidean distances would be meaningful. When your data consist of variables of mixed types, you need to use Gower's distance. CV user @ttnphns has a great overview of Gower's distance here. In essence, you compute a distance matrix for your rows for each variable in turn, using a type of distance that is appropriate for that type of variable (e.g., Euclidean for continuous data, etc.); the final distance of row $i$ to $i'$ is the (possibly weighted) average of the distances for each variable. One thing to be aware of is that Gower's distance isn't actually a metric. Nonetheless, with mixed data, Gower's distance is largely the only game in town. At this point, you can use any clustering method that can operate over a distance matrix instead of needing the original data matrix. (Note that k-means needs the latter.) The most popular choices are partitioning around medoids (PAM, which is essentially the same as k-means, but uses the most central observation rather than the centroid), various hierarchical clustering approaches (e.g., median, single-linkage, and complete-linkage; with hierarchical clustering you will need to decide where to 'cut the tree' to get the final cluster assignments), and DBSCAN which allows much more flexible cluster shapes. 
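To make that per-variable averaging concrete, here is a minimal pure-Python sketch of Gower's distance for a single pair of rows. It is illustrative only: real implementations such as `daisy()` also handle weights and missing values, and all names here are mine.

```python
# Sketch: Gower's distance = average of per-variable distances in [0, 1].
def gower(row_a, row_b, kinds, spans):
    """kinds[j] is 'num' or 'cat'; spans[j] is the range of numeric var j."""
    d = []
    for a, b, kind, span in zip(row_a, row_b, kinds, spans):
        if kind == 'num':
            d.append(abs(a - b) / span)       # range-scaled Manhattan distance
        else:
            d.append(0.0 if a == b else 1.0)  # simple matching for categoricals
    return sum(d) / len(d)

# two rows: (continuous, binary, ordinal treated as categorical)
print(gower((2.0, 1, 'low'), (4.0, 0, 'low'),
            kinds=('num', 'cat', 'cat'), spans=(10.0, None, None)))
# (0.2 + 1.0 + 0.0) / 3 = 0.4
```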
Here is a simple R demo (n.b., there are actually 3 clusters, but the data mostly look like 2 clusters are appropriate):

library(cluster)  # we'll use these packages
library(fpc)

# here we're generating 45 data in 3 clusters:
set.seed(3296)    # this makes the example exactly reproducible
n    = 15
cont = c(rnorm(n, mean=0, sd=1),
         rnorm(n, mean=1, sd=1),
         rnorm(n, mean=2, sd=1) )
bin  = c(rbinom(n, size=1, prob=.2),
         rbinom(n, size=1, prob=.5),
         rbinom(n, size=1, prob=.8) )
ord  = c(rbinom(n, size=5, prob=.2),
         rbinom(n, size=5, prob=.5),
         rbinom(n, size=5, prob=.8) )
data = data.frame(cont=cont, bin=bin, ord=factor(ord, ordered=TRUE))

# this returns the distance matrix with Gower's distance:
g.dist = daisy(data, metric="gower", type=list(symm=2))

We can start by searching over different numbers of clusters with PAM:

pc = pamk(g.dist, krange=1:5, criterion="asw")
pc[2:3]
# $nc
# [1] 2          # 2 clusters maximize the average silhouette width
#
# $crit
# [1] 0.0000000 0.6227580 0.5593053 0.5011497 0.4294626
pc = pc$pamobject;  pc  # this is the optimal PAM clustering
# Medoids:
#      ID
# [1,] "29" "29"
# [2,] "33" "33"
# Clustering vector:
#  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
#  1  1  1  1  1  2  1  1  1  1  1  2  1  2  1  2  2  1  1  1  2  1  2  1  2  2
# 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45
#  1  2  1  2  2  1  2  2  2  2  1  2  1  2  2  2  2  2  2
# Objective function:
#     build      swap
# 0.1500934 0.1461762
#
# Available components:
# [1] "medoids"    "id.med"     "clustering" "objective"  "isolation"
# [6] "clusinfo"   "silinfo"    "diss"       "call"

Those results can be compared to the results of hierarchical clustering:

hc.m = hclust(g.dist, method="median")
hc.s = hclust(g.dist, method="single")
hc.c = hclust(g.dist, method="complete")
windows(height=3.5, width=9)
  layout(matrix(1:3, nrow=1))
  plot(hc.m)
  plot(hc.s)
  plot(hc.c)

The median method suggests 2 (possibly 3) clusters, the single only supports 2, but the complete method could suggest 2, 3 or 4 to my eye.

Finally, we can try DBSCAN. This requires specifying two parameters: eps, the 'reachability distance' (how close two observations have to be to be linked together), and minPts (the minimum number of points that need to be connected to each other before you are willing to call them a 'cluster'). A rule of thumb for minPts is to use one more than the number of dimensions (in our case 3+1=4), but having a number that's too small isn't recommended. The default value for dbscan is 5; we'll stick with that. One way to think about the reachability distance is to see what percent of the distances are less than any given value. We can do that by examining the distribution of the distances:

windows()
  layout(matrix(1:2, nrow=1))
  plot(density(na.omit(g.dist[upper.tri(g.dist)])), main="kernel density")
  plot(ecdf(g.dist[upper.tri(g.dist)]), main="ECDF")

The distances themselves seem to cluster into visually discernible groups of 'nearer' and 'further away'. A value of .3 seems to most cleanly distinguish between the two groups of distances. To explore the sensitivity of the output to different choices of eps, we can try .2 and .4 as well:

dbc3 = dbscan(g.dist, eps=.3, MinPts=5, method="dist");  dbc3
# dbscan Pts=45 MinPts=5 eps=0.3
#        1  2
# seed  22 23
# total 22 23
dbc2 = dbscan(g.dist, eps=.2, MinPts=5, method="dist");  dbc2
# dbscan Pts=45 MinPts=5 eps=0.2
#          1  2
# border   2  1
# seed    20 22
# total   22 23
dbc4 = dbscan(g.dist, eps=.4, MinPts=5, method="dist");  dbc4
# dbscan Pts=45 MinPts=5 eps=0.4
#        1
# seed  45
# total 45

Using eps=.3 does give a very clean solution, which (qualitatively at least) agrees with what we saw from other methods above.

Since there is no meaningful 'cluster 1-ness', we should be careful of trying to match which observations are called 'cluster 1' from different clusterings. Instead, we can form tables: if most of the observations called 'cluster 1' in one fit are called 'cluster 2' in another, we can still see that the results are substantively similar. In our case, the different clusterings are mostly very stable and put the same observations in the same clusters each time; only the complete linkage hierarchical clustering differs:

# comparing the clusterings
table(cutree(hc.m, k=2), cutree(hc.s, k=2))
#    1  2
# 1 22  0
# 2  0 23
table(cutree(hc.m, k=2), pc$clustering)
#    1  2
# 1 22  0
# 2  0 23
table(pc$clustering, dbc3$cluster)
#    1  2
# 1 22  0
# 2  0 23
table(cutree(hc.m, k=2), cutree(hc.c, k=2))
#    1  2
# 1 14  8
# 2  7 16

Of course, there is no guarantee that any cluster analysis will recover the true latent clusters in your data. The absence of the true cluster labels (which would be available in, say, a logistic regression situation) means that an enormous amount of information is unavailable. Even with very large datasets, the clusters may not be sufficiently well separated to be perfectly recoverable. In our case, since we know the true cluster membership, we can compare that to the output to see how well it did. As I noted above, there are actually 3 latent clusters, but the data give the appearance of 2 clusters instead:

pc$clustering[1:15]   # these were actually cluster 1 in the data generating process
#  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
#  1  1  1  1  1  2  1  1  1  1  1  2  1  2  1
pc$clustering[16:30]  # these were actually cluster 2 in the data generating process
# 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
#  2  2  1  1  1  2  1  2  1  2  2  1  2  1  2
pc$clustering[31:45]  # these were actually cluster 3 in the data generating process
# 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45
#  2  1  2  2  2  2  1  2  1  2  2  2  2  2  2
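The cross-tabulation step itself is language-agnostic. Here is a pure-Python analogue of R's `table(a, b)`, a sketch with made-up label vectors, showing that two identical partitions line up cell-for-cell even when the labels are swapped:

```python
# Sketch: cross-tabulate two clustering label vectors; identical partitions
# show up as counts concentrated in one cell per row, even if labels swap.
from collections import Counter

def crosstab(a, b):
    return Counter(zip(a, b))

pam    = [1, 1, 2, 2, 2, 1]
hclust = [2, 2, 1, 1, 1, 2]    # same partition, labels swapped
tab = crosstab(pam, hclust)
print(dict(tab))               # {(1, 2): 3, (2, 1): 3}
```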
7,260
How to use both binary and continuous variables together in clustering?
Look at this paper by Finch, http://www.jds-online.com/files/JDS-192.pdf. It describes both why applying continuous methods to binary data may inaccurately cluster the data, and more importantly what are some choices in appropriate distance functions. It does not answer how to cluster with k-means, but rather how to properly cluster binary data using non-Euclidean metrics and a hierarchical method like Ward.
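As a small illustration of why the choice of binary distance matters (a sketch; the example vectors are made up): simple matching counts 0-0 agreements as similarity, while the Jaccard distance ignores them, which can change which observations look close.

```python
# Sketch of two common binary dissimilarities: simple matching vs Jaccard.
def simple_matching_dist(a, b):
    # fraction of positions where the two binary vectors disagree
    return sum(x != y for x, y in zip(a, b)) / len(a)

def jaccard_dist(a, b):
    # 1 - |intersection| / |union|, ignoring shared absences (0-0 pairs)
    union = sum(x or y for x, y in zip(a, b))
    if union == 0:
        return 0.0
    inter = sum(x and y for x, y in zip(a, b))
    return 1 - inter / union

a = [1, 1, 0, 0, 0]
b = [1, 0, 1, 0, 0]
print(simple_matching_dist(a, b))  # 2/5 = 0.4
print(jaccard_dist(a, b))          # 1 - 1/3 = 0.666...
```

The two shared zeros make the vectors look fairly similar under simple matching but much less so under Jaccard; which behaviour is right depends on whether a shared absence is informative for your variables.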
7,261
Internal vs external cross-validation and model selection
Let me add a few points to the nice answers that are already here: Nested K-fold vs repeated K-fold: nested and repeated k-fold are totally different things, used for different purposes. As you already know, nested is good if you want to use the inner cv for model selection. repeated: IMHO you should always repeat the k-fold cv [see below]. I therefore recommend to repeat any nested k-fold cross validation. Better report "The statistics of our estimator, e.g. its confidence interval, variance, mean, etc. on the full sample (in this case the CV sample).": Sure. However, you need to be aware of the fact that you will not (easily) be able to estimate the confidence interval by the cross validation results alone. The reason is that, however much you resample, the actual number of cases you look at is finite (and usually rather small - otherwise you'd not bother about these distinctions). See e.g. Bengio, Y. and Grandvalet, Y.: No Unbiased Estimator of the Variance of K-Fold Cross-Validation Journal of Machine Learning Research, 2004, 5, 1089-1105. However, in some situations you can nevertheless make estimations of the variance: With repeated k-fold cross validation, you can get an idea whether model instability does play a role. And this instability-related variance is actually the part of the variance that you can reduce by repeated cross-validation. (If your models are perfectly stable, each repetition/iteration of the cross validation will have exactly the same predictions for each case. However, you still have variance due to the actual choice/composition of your data set). So there is a limit to the lower variance of repeated k-fold cross validation. Doing more and more repetitions/iterations does not make sense, as the variance caused by the fact that in the end only $n$ real cases were tested is not affected. The variance caused by the fact that in the end only $n$ real cases were tested can be estimated for some special cases, e.g. 
the performance of classifiers as measured by proportions such as hit rate, error rate, sensitivity, specificity, predictive values and so on: they follow binomial distributions. Unfortunately, this means that they have huge variance $\sigma^2 (\hat p) = \frac{1}{n} p (1 - p)$ with $p$ the true performance value of the model, $\hat p$ the observed value, and $n$ the sample size in the denominator of the fraction. This variance is maximal for $p = 0.5$. You can also calculate confidence intervals starting from the observation. (@Frank Harrell will comment that these are no proper scoring rules, so you anyway shouldn't use them - which is related to the huge variance.) However, IMHO they are useful for deriving conservative bounds (there are better scoring rules, and the bad behaviour of these fractions is a worst-case limit for the better rules), see e.g. C. Beleites, R. Salzer and V. Sergo: Validation of Soft Classification Models using Partial Class Memberships: An Extended Concept of Sensitivity & Co. applied to Grading of Astrocytoma Tissues, Chemom. Intell. Lab. Syst., 122 (2013), 12 - 22.

So this lets me turn around your argumentation against the hold-out: resampling alone does not (necessarily) give you a good estimate of the variance either; OTOH, if you can reason about the finite-test-sample-size variance of the cross-validation estimate, that is also possible for hold-out.

"Our estimator for this single measurement would have been trained on a set (e.g. the CV set) that is smaller than our initial sample since we have to make room for the hold-out set. This results in a more biased (pessimistic) estimation in P1."

Not necessarily (if compared to k-fold) - but you have to trade off: a small hold-out set (e.g. $\frac{1}{k}$ of the sample) means low bias (≈ same as k-fold CV) but high variance (> k-fold CV, roughly by a factor of $k$).

"It looks to me that reporting on the hold-out test set is bad practice since the analysis of the CV sample is more informative."

Usually, yes.
However, it is also good to keep in mind that there are important types of errors (such as drift) that cannot be measured/detected by resampling validation. See e.g. Esbensen, K. H. and Geladi, P.: Principles of Proper Validation: use and abuse of re-sampling for validation, Journal of Chemometrics, 2010, 24, 168-187.

"but it looks to me that for the same number of total models trained (total # of folds) repeated K-fold would yield estimators that are less biased and more accurate than nested K-fold. To see this: Repeated K-fold uses a larger fraction of our total sample than nested K-fold for the same K (i.e. it leads to lower bias)"

I'd say no to this: it doesn't matter how the model training uses its $\frac{k - 1}{k} n$ training samples, as long as the surrogate models and the "real" model use them in the same way. (I look at the inner cross-validation / estimation of hyper-parameters as part of the model set-up.) Things look different if you compare surrogate models which are trained including hyper-parameter optimization to "the" model which is trained on fixed hyper-parameters. But IMHO that is generalizing from $k$ apples to 1 orange.

"100 iterations would only give 10 measurements of our estimator in nested K-fold (K=10), but 100 measurements in K-fold (more measurements leads to lower variance in P2)"

Whether this makes a difference depends on the instability of the (surrogate) models, see above. For stable models it is irrelevant, and so it may also not matter whether you do 1000 or 100 outer repetitions/iterations.

And this paper definitely belongs on the reading list on this topic: Cawley, G. C. and Talbot, N. L. C.: On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation, Journal of Machine Learning Research, 2010, 11, 2079-2107.
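The binomial variance $\sigma^2(\hat p) = \frac{1}{n} p (1 - p)$ quoted above is easy to check numerically; a quick sketch:

```python
# Sketch: the variance p(1-p)/n of a proportion-type performance estimate,
# scanning a grid of p values to confirm the maximum sits at p = 0.5.
def var_phat(p, n):
    return p * (1 - p) / n

n = 100
grid = [i / 1000 for i in range(1, 1000)]
worst = max(grid, key=lambda p: var_phat(p, n))
print(worst, var_phat(worst, n))   # 0.5 0.0025
```

At $p = 0.5$ and $n = 100$ the standard error is $0.05$, i.e. a 95% interval roughly $\pm 10$ percentage points wide, which is the "huge variance" the answer warns about.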
Internal vs external cross-validation and model selection
Let me add a few points to the nice answers that are already here: Nested K-fold vs repeated K-fold: nested and repeated k-fold are totally different things, used for different purposes. As you alre
Internal vs external cross-validation and model selection Let me add a few points to the nice answers that are already here: Nested K-fold vs repeated K-fold: nested and repeated k-fold are totally different things, used for different purposes. As you already know, nested is good if you want to use the inner cv for model selection. repeated: IMHO you should always repeat the k-fold cv [see below]. I therefore recommend to repeat any nested k-fold cross validation. Better report "The statistics of our estimator, e.g. its confidence interval, variance, mean, etc. on the full sample (in this case the CV sample).": Sure. However, you need to be aware of the fact that you will not (easily) be able to estimate the confidence interval by the cross validation results alone. The reason is that, however much you resample, the actual number of cases you look at is finite (and usually rather small - otherwise you'd not bother about these distinctions). See e.g. Bengio, Y. and Grandvalet, Y.: No Unbiased Estimator of the Variance of K-Fold Cross-Validation Journal of Machine Learning Research, 2004, 5, 1089-1105. However, in some situations you can nevertheless make estimations of the variance: With repeated k-fold cross validation, you can get an idea whether model instability does play a role. And this instability-related variance is actually the part of the variance that you can reduce by repeated cross-validation. (If your models are perfectly stable, each repetition/iteration of the cross validation will have exactly the same predictions for each case. However, you still have variance due to the actual choice/composition of your data set). So there is a limit to the lower variance of repeated k-fold cross validation. Doing more and more repetitions/iterations does not make sense, as the variance caused by the fact that in the end only $n$ real cases were tested is not affected. 
The variance caused by the fact that in the end only $n$ real cases were tested can be estimated for some special cases, e.g. the performance of classifiers as measured by proportions such as hit rate, error rate, sensitivity, specificity, predictive values and so on: they follow binomial distributions. Unfortunately, this means that they have huge variance $\sigma^2 (\hat p) = \frac{1}{n} p (1 - p)$ with $p$ the true performance value of the model, $\hat p$ the observed, and $n$ the sample size in the denominator of the fraction. This has the maximum for $p = 0.5$. You can also calculate confidence intervals starting from the observation. (@Frank Harrell will comment that these are no proper scoring rules, so you shouldn't use them anyway - which is related to the huge variance). However, IMHO they are useful for deriving conservative bounds (there are better scoring rules, and the bad behaviour of these fractions is a worst-case limit for the better rules), see e.g. C. Beleites, R. Salzer and V. Sergo: Validation of Soft Classification Models using Partial Class Memberships: An Extended Concept of Sensitivity & Co. applied to Grading of Astrocytoma Tissues, Chemom. Intell. Lab. Syst., 122 (2013), 12 - 22. So this lets me turn around your argumentation against the hold-out: Neither does resampling alone (necessarily) give you a good estimate of the variance; OTOH, if you can reason about the finite-test-sample-size variance of the cross validation estimate, that is also possible for hold out. Our estimator for this single measurement would have been trained on a set (e.g. the CV set) that is smaller than our initial sample since we have to make room for the hold-out set. This results in a more biased (pessimistic) estimation in P1. Not necessarily (if compared to k-fold) - but you have to trade off: small hold-out set (e.g. $\frac{1}{k}$ of the sample) => low bias (≈ same as k-fold cv), high variance (> k-fold cv, roughly by a factor of $k$).
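The binomial variance $\sigma^2(\hat p) = \frac{1}{n} p(1-p)$ and the confidence intervals mentioned here can be sketched in a few lines of plain Python (the function names and the 80 %-on-50-cases example are mine):

```python
import math

def proportion_variance(p, n):
    """Variance of an observed proportion (hit rate, error rate, ...)
    when the true value is p and it is measured on n test cases."""
    return p * (1.0 - p) / n

def binomial_ci(p_hat, n, z=1.96):
    """Normal-approximation confidence interval for the observed
    proportion p_hat; conservative only for n not too small."""
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)
    return (max(0.0, p_hat - z * se), min(1.0, p_hat + z * se))

# The variance peaks at p = 0.5, and 80 % accuracy measured on just
# 50 held-out cases is still a very wide interval:
lo, hi = binomial_ci(0.80, 50)
```

This makes the point concrete: the interval width shrinks only with the number of real test cases $n$, no matter how often you resample.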
It looks to me that reporting on the hold-out test set is bad practice since the analysis of the CV sample is more informative. Usually, yes. However, it is also good to keep in mind that there are important types of errors (such as drift) that cannot be measured/detected by resampling validation. See e.g. Esbensen, K. H. and Geladi, P.: Principles of Proper Validation: use and abuse of re-sampling for validation, Journal of Chemometrics, 2010, 24, 168-187. But it looks to me that for the same number of total models trained (total # of folds) repeated K-fold would yield estimators that are less biased and more accurate than nested K-fold. To see this: Repeated K-fold uses a larger fraction of our total sample than nested K-fold for the same K (i.e. it leads to lower bias). I'd say no to this: it doesn't matter how the model training uses its $\frac{k - 1}{k} n$ training samples, as long as the surrogate models and the "real" model use them in the same way. (I look at the inner cross-validation / estimation of hyper-parameters as part of the model set-up.) Things look different if you compare surrogate models which are trained including hyper-parameter optimization to "the" model which is trained on fixed hyper-parameters. But IMHO that is generalizing from $k$ apples to 1 orange. 100 iterations would only give 10 measurements of our estimator in nested K-fold (K=10), but 100 measurements in K-fold (more measurements lead to lower variance in P2). Whether this makes a difference depends on the instability of the (surrogate) models, see above. For stable models it is irrelevant. So it may also be irrelevant whether you do 1000 or 100 outer repetitions/iterations. And this paper definitely belongs on the reading list on this topic: Cawley, G. C. and Talbot, N. L. C.: On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation, Journal of Machine Learning Research, 2010, 11, 2079-2107
7,262
Internal vs external cross-validation and model selection
A key reference explaining this is:
@ARTICLE{pic90,
  author  = {Picard, R. R. and Berk, K. N.},
  year    = 1990,
  title   = {Data splitting},
  journal = {The American Statistician},
  volume  = 44,
  pages   = {140-147}
}
See also:
@Article{mic05pre,
  author  = {Michiels, Stefan and Koscielny, Serge and Hill, Catherine},
  title   = {Prediction of cancer outcome with microarrays: a multiple random validation strategy},
  journal = {Lancet},
  year    = 2005,
  volume  = 365,
  pages   = {488-492},
  annote  = {comment on p. 454; validation; microarray; bioinformatics; machine learning; nearest centroid; severe problems with data splitting; high variability of list of genes; problems with published studies; nice results for effect of training sample size on misclassification error; nice use of confidence intervals on accuracy estimates; unstable molecular signatures; high instability due to dependence on selection of training sample}
}
In my own work I've found that data splitting requires training and test sample sizes approaching 10,000 to work satisfactorily.
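The point about data splitting needing very large samples can be illustrated with a tiny simulation (numpy; the accuracy and sample sizes are illustrative numbers of mine, not from the references above). For a classifier with fixed true accuracy, the spread of single-split accuracy estimates shrinks only like $1/\sqrt{n_{\text{test}}}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def split_estimate_spread(p_true, n_test, n_splits=2000):
    """Standard deviation of accuracy estimates over many random
    test sets of size n_test, for a model with true accuracy p_true."""
    correct = rng.binomial(n_test, p_true, size=n_splits)
    return (correct / n_test).std()

# A single split with a few hundred test cases is far noisier than
# one with ~10,000 cases:
sd_small = split_estimate_spread(0.9, 200)
sd_large = split_estimate_spread(0.9, 10_000)
```

With $n_{\text{test}} = 200$ the estimate wobbles by roughly two accuracy points from split to split, which is consistent with the recommendation above.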
7,263
Internal vs external cross-validation and model selection
It really depends on your model building process, but I found this paper helpful: http://www.biomedcentral.com/content/pdf/1471-2105-7-91.pdf The crux of what is discussed here is the significant liberal bias (estimating model performance to be better than it will actually be) that will occur if you are selecting your model based on the same thing that you are using to estimate its performance. So, if you are selecting your model from a set of possible models by looking at its cross validation error, you should not use cross validation error (or any other internal estimation method) to estimate the model performance. Another useful resource is https://stats.stackexchange.com/a/27751/26589 This post lays out a clear example of how selecting your features when all the data is "seen" will lead to a liberal bias in model performance (saying your model will perform better than it actually will). If you would like me to lay out an example that is more specific to what you do, maybe you could give a general description of the types of models you're building (how much data you have, how many features you're selecting from, the actual model, etc.).
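The standard remedy for this selection bias is nested cross-validation: the inner loop picks the model, the outer loop evaluates the whole selection procedure. A minimal scikit-learn sketch (the estimator and hyper-parameter grid are arbitrary choices of mine):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

# Inner CV selects C; outer CV scores the *entire* procedure, so no
# data point used for selection is also used for evaluation.
inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=3)
outer_scores = cross_val_score(inner, X, y, cv=5)
```

Reporting `inner.best_score_` instead of `outer_scores` would reproduce exactly the liberal bias the paper warns about.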
7,264
Internal vs external cross-validation and model selection
I think your understanding is correct: the estimator for the loss obtained by using a single hold-out test set usually has high variance. By performing something like K-fold cross-validation you obtain a more accurate idea of the loss, as well as a sense of the distribution of the loss. There is usually a tradeoff: the more CV folds, the better your estimate, but more computational time is needed.
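A toy model of this variance reduction (mine, and simplified: it treats the k per-fold estimates as independent, which real CV folds are not, so it is an optimistic bound):

```python
import numpy as np

rng = np.random.default_rng(1)
p_true, n, k = 0.85, 500, 10  # true accuracy, sample size, folds

def holdout_estimate():
    # accuracy measured on a single fold of n/k held-out cases
    return rng.binomial(n // k, p_true) / (n // k)

def kfold_estimate():
    # average of k per-fold estimates: every case is tested exactly once
    return np.mean([holdout_estimate() for _ in range(k)])

holdout_sd = np.std([holdout_estimate() for _ in range(2000)])
kfold_sd = np.std([kfold_estimate() for _ in range(2000)])
```

Under these independence assumptions the k-fold estimate's standard deviation is about $1/\sqrt{k}$ of the single hold-out's, which is the "more accurate idea of the loss" referred to above.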
7,265
Experimental evidence supporting Tufte-style visualizations?
The literature is vast. Experimental evidence is abundant but incomplete. For an introduction that focuses on the psychological and semiotic investigations, see Alan M. MacEachren, How Maps Work (1995; 2004 in paperback). Jump directly to chapter 9 (near the end) and then work backwards through any preliminary material that interests you. The bibliography is extensive (over 400 documents) but is getting a little long in the tooth. Although the title suggests a focus on cartography, most of the book is relevant to how humans create meaning out of and interpret graphical information. Don't expect to get a definitive answer out of any amount of such research. Remember that Tufte, Cleveland, and others were primarily focused on creating graphics that enable (above all) accurate, insightful communication of and interpretation of data. Other graphics artists and researchers have other aims, such as influencing people, creating effective propaganda, simplifying complex datasets, and expressing their artistic sensibilities within a graphical medium. These are almost diametrically opposed to the first set of objectives, whence the hugely differing approaches and recommendations you will find. Given this, I think a review of Cleveland's research should be sufficiently convincing that many of Tufte's design recommendations have decent experimental justification. These include his use of the Lie Factor, the Data-Ink Ratio, small multiples, and chartjunk for critically evaluating and designing statistical graphics.
7,266
Experimental evidence supporting Tufte-style visualizations?
Here's some; Cleveland and McGill (1984, JASA) Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods Cleveland and McGill (1987, JRSSA) Graphical Perception: The Visual Decoding of Quantitative Information on Graphical Displays of Data Lewandowsky and Spence (1989) Discriminating Strata in Scatterplots Spence and Lewandowsky (1991) Displaying Proportions and Percentages Spence Kutlesa and Rose (1999) Using Color to Code Quantity in Spatial Displays Ask the Google for the full references
7,267
Experimental evidence supporting Tufte-style visualizations?
It's worth remembering that information visualisation isn't some island cut off from all other forms of visual communication. If you want to produce work based on evidence-based principles, I'd argue it's best to look where the evidence is strongest. I've read specific research on data visualisation techniques, and general research in cognitive science and in general design research, and I find that thinking through how the more powerful, more thorough general research applies to each brief and each element used is often more effective and useful than trying to apply the narrowly applied field-specific research which often suffers from small samples, weak research techniques, narrow investigation and/or deeply ingrained assumptions. There are two excellent books I recommend as an introduction, one with the science as a starting point, one with general principles as a starting point, bringing in evidence: Vision Science by Steve Palmer. It's a beast, and as a student I nearly gave myself a back injury on the few occasions I was foolish enough to carry it in a backpack, but it's also possibly the best science textbook I've ever seen, and a great example of crisp visual and verbal communication itself. I went through it recently to label the chapters with content directly relevant to my work in visualisation and information design, expecting to only label a few: I ended up labelling every chapter except one. Universal Principles of Design by Rockport Press. A very ambitious and useful book which crunches cognitive science research with case studies and examples from across all branches of design into a series of awesomely clear and straight-to-the-point double-page spreads, each covering one established, evidence-based and practical principle, with practical suggestions, worked examples and suggested further reading. Very stimulating, so long as you think of it as a list of tools with suggested uses, not a list of rules. 
The only downside is, this approach takes more thinking to see how such principles are applicable. If you're looking for a list of arbitrary rules, as many in the data vis community seem to be, I'd say there isn't one and never will be except where people make massive unjustified assumptions and generalisations, or make things up. The better quality applied research is useful, but it helps to have a solid framework which it can slot into. Most of Tufte's general principles such as data-ink and chart-junk can be traced back to solid general principles such as signal-noise ratios, figure-ground, attenuation, and others - but on the route to becoming field-specific and prescriptive, they have been combined with hefty assumptions and generalisations about your objectives and audience that turn them into blunt tools. Many of the apparent contradictions and debates in the applied research aren't contradictions at all if you take a step back, take context into account and work through from the underlying core principles and the particular features of each case.
7,268
Experimental evidence supporting Tufte-style visualizations?
There was one really good study in the field of cartography (Hegarty et al. (2009): Naïve cartography: How intuitions about display configuration can hurt performance. Published in: Cartographica: The International Journal for Geographic Information and Geovisualization 44(3):171-186). It is especially interesting as the authors looked at a more complex task than simply reading values off a bar chart: Expert and novice users had to determine wind speeds and pressure gradients from a meteorological map. Both groups of participants intuitively preferred a map with added relief shading and state borders (something Tufte would probably refer to as “chartjunk” as it is irrelevant to the task) over a more minimalistic map showing only the outline of America in the background. But even though there was such a strong personal preference for chartjunk, participants actually performed significantly worse using this map, both in terms of accuracy and response time. What I found particularly interesting about this study is that a complex use case (like meteorology) is really common for us designers/cartographers/data analysts. Often it is not just about some little bar chart; we need to design entire dashboards, thematic maps, Sankey diagrams,... Cutting down on your chart junk does improve your visualizations in this context a lot of the time.
7,269
Benefits of stratified vs random sampling for generating training data in classification
Stratified sampling aims at splitting a data set so that each split is similar with respect to something. In a classification setting, it is often chosen to ensure that the train and test sets have approximately the same percentage of samples of each target class as the complete set. As a result, if the data set has a large amount of each class, stratified sampling is pretty much the same as random sampling. But if one class isn't much represented in the data set, which may be the case in your dataset since you plan to oversample the minority class, then stratified sampling may yield a different target class distribution in the train and test sets than what random sampling may yield. Note that stratified sampling may also be designed to equally distribute some features in the train and test sets. For example, if each sample represents one individual, and one feature is age, it is sometimes useful to have the same age distribution in both the train and test set. FYI: Why use stratified cross validation? Why does this not damage variance related benefit? Understanding stratified cross-validation
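In scikit-learn this is the `stratify` argument of `train_test_split`; a minimal sketch with an imbalanced label vector of my own construction:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Imbalanced labels: 90 negatives, 10 positives (10 % positive rate)
y = np.array([0] * 90 + [1] * 10)
X = np.arange(100).reshape(-1, 1)

# stratify=y keeps the 10 % positive rate in both splits; a plain
# random split can easily end up with 0-20 % positives in the test set.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
```

Here the 20-case test set gets exactly 2 positives, matching the full-set class distribution.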
7,270
Why are Gaussian process models called non-parametric?
I'll preface this by saying that it isn't always clear what one means by "nonparametric" or "semiparametric" etc. In the comments, it seems likely that whuber has some formal definition in mind (maybe something like choosing a model $M_\theta$ from some family $\{M_\theta: \theta \in \Theta\}$ where $\Theta$ is infinite dimensional), but I'm going to be pretty informal. Some might argue that a nonparametric method is one where the effective number of parameters you use increases with the data. I think there is a video on videolectures.net where (I think) Peter Orbanz gives four or five different takes on how we can define "nonparametric." Since I think I know what sorts of things you have in mind, for simplicity I'll assume that you are talking about using Gaussian processes for regression, in a typical way: we have training data $(Y_i, X_i), i = 1, ..., n$ and we are interested in modeling the conditional mean $E(Y|X = x) := f(x)$. We write $$ Y_i = f(X_i) + \epsilon_i $$ and perhaps we are so bold as to assume that the $\epsilon_i$ are iid and normally distributed, $\epsilon_i \sim N(0, \sigma^2)$. $X_i$ will be one dimensional, but everything carries over to higher dimensions. If our $X_i$ can take values in a continuum then $f(\cdot)$ can be thought of as a parameter of (uncountably) infinite dimension. So, in the sense that we are estimating a parameter of infinite dimension, our problem is a nonparametric one. It is true that the Bayesian approach has some parameters floating about here and there. But really, it is called nonparametric because we are estimating something of infinite dimension. The GP priors we use assign mass to every neighborhood of every continuous function, so they can estimate any continuous function arbitrarily well. 
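The "prior over a space of functions" idea can be made concrete by drawing a few functions from a GP prior evaluated on a fine grid (numpy; the RBF covariance, grid, and jitter are illustrative choices of mine):

```python
import numpy as np

def rbf(x1, x2, length=1.0):
    """Squared-exponential (RBF) covariance function."""
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / length ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 100)
K = rbf(x, x) + 1e-8 * np.eye(100)  # small jitter for numerical PSD

# Each draw is one "random smooth function" under the prior:
samples = rng.multivariate_normal(np.zeros(100), K, size=3)
```

Evaluating on a grid is of course only a finite-dimensional view of the infinite-dimensional object $f(\cdot)$, but it shows how the covariance function encodes smoothness beliefs.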
The things in the covariance function are playing a role similar to the smoothing parameters in the usual frequentist estimators - in order for the problem to not be absolutely hopeless we have to assume that there is some structure that we expect to see $f$ exhibit. Bayesians accomplish this by using a prior on the space of continuous functions in the form of a Gaussian process. From a Bayesian perspective, we are encoding beliefs about $f$ by assuming $f$ is drawn from a GP with such-and-such covariance function. The prior effectively penalizes estimates of $f$ for being too complicated. Edit for computational issues: Most (all?) of this stuff is in the Gaussian Process book by Rasmussen and Williams. Computational issues are tricky for GPs. If we proceed naively we will need $O(N^2)$ size memory just to hold the covariance matrix and (it turns out) $O(N^3)$ operations to invert it. There are a few things we can do to make things more feasible. One option is to note that the guy we really need is $v$, the solution to $(K + \sigma^2 I)v = Y$ where $K$ is the covariance matrix. The method of conjugate gradients solves this exactly in $O(N^3)$ computations, but if we satisfy ourselves with an approximate solution we could terminate the conjugate gradient algorithm after $k$ steps and do it in $O(kN^2)$ computations. We also don't necessarily need to store the whole matrix $K$ at once. So we've moved from $O(N^3)$ to $O(kN^2)$, but this still scales quadratically in $N$, so we might not be happy. The next best thing is to work instead with a subset of the data, say of size $m$ where inverting and storing an $m \times m$ matrix isn't so bad. Of course, we don't want to just throw away the remaining data. 
The subset of regressors approach notes that we can derive the posterior mean of our GP as a regression of our data $Y$ on $N$ data-dependent basis functions determined by our covariance function; so we throw all but $m$ of these away and we are down to $O(m^2 N)$ computations. A couple of other potential options exist. We could construct a low-rank approximation to $K$, setting $K = QQ^T$ where $Q$ is $n \times q$ and of rank $q$; it turns out that inverting $K + \sigma^2 I$ in this case can be done by instead inverting the much smaller matrix $Q^TQ + \sigma^2 I$. Another option is to choose the covariance function to be sparse and use conjugate gradient methods - if the covariance matrix is very sparse then this can speed up computations substantially.
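The low-rank trick in the last paragraph can be checked numerically. A minimal sketch with synthetic data: when $K = QQ^T$, the Woodbury identity reduces solving $(K + \sigma^2 I)v = Y$ to a $q \times q$ system.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, sigma2 = 200, 10, 0.5

Q = rng.standard_normal((n, q))  # low-rank factor: K = Q Q^T has rank q
K = Q @ Q.T
y = rng.standard_normal(n)

# Direct solve: O(n^3) time, O(n^2) memory
v_direct = np.linalg.solve(K + sigma2 * np.eye(n), y)

# Woodbury: (QQ^T + s I)^{-1} y = (y - Q (Q^T Q + s I)^{-1} Q^T y) / s
# Only a q x q system is solved: O(n q^2) work overall.
inner = np.linalg.solve(Q.T @ Q + sigma2 * np.eye(q), Q.T @ y)
v_woodbury = (y - Q @ inner) / sigma2

print(np.allclose(v_direct, v_woodbury))  # True
```

The same identity is what makes the subset-of-regressors and other sparse GP approximations cheap: the expensive inversion only ever involves the small inner matrix.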
Why are Gaussian process models called non-parametric?
7,271
Why are Gaussian process models called non-parametric?
Generally speaking, the "nonparametric" in Bayesian nonparametrics refers to models with an infinite number of (potential) parameters. There are a lot of really nice tutorials and lectures on the subject on videolectures.net (like this one) which give nice overviews of this class of models. Specifically, the Gaussian Process (GP) is considered nonparametric because a GP represents a function (i.e. an infinite-dimensional vector). As the number of data points increases ((x, f(x)) pairs), so does the number of model 'parameters' (restricting the shape of the function). Unlike a parametric model, where the number of parameters stays fixed with respect to the size of the data, in nonparametric models the number of parameters grows with the number of data points.
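This growth of the effective parameter count can be made concrete. Under Gaussian noise, the GP posterior mean can be written as $\sum_i \alpha_i k(\cdot, x_i)$: one coefficient per data point. A small sketch (squared-exponential kernel; the data and hyperparameter values are arbitrary choices):

```python
import numpy as np

def sq_exp_kernel(a, b, h=1.0, lam=0.3):
    # k(x, x') = h^2 exp(-(x - x')^2 / lam^2)
    return h**2 * np.exp(-np.subtract.outer(a, b) ** 2 / lam**2)

def gp_mean_coeffs(x, y, sigma2=0.1):
    # Posterior mean is f(x*) = sum_i alpha_i k(x*, x_i):
    # one coefficient ("parameter") per training point.
    K = sq_exp_kernel(x, x)
    return np.linalg.solve(K + sigma2 * np.eye(len(x)), y)

rng = np.random.default_rng(1)
for n in (5, 20, 80):
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)
    alpha = gp_mean_coeffs(x, y)
    print(n, alpha.shape)  # the coefficient vector grows with the data
```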
7,272
Why are Gaussian process models called non-parametric?
The parameters that you referred to as hyperparameters are not physically motivated parameters, hence the name. They are used solely to parameterize the kernel function. To give an example, in a Gaussian kernel: $K(x_i,x_j) = h^2 \exp(\frac{-(x_i - x_j)^2}{\lambda^2})$ the $h$ and $\lambda$ are the hyperparameters, but they do not relate to quantities such as temperature, pollution concentration, etc., which you might encounter in a true parametric model. This issue was addressed in this lecture as well; it might help you get a better understanding.
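A small numeric illustration of the two hyperparameters (the values used are arbitrary): $h$ sets the output scale, while $\lambda$ is a length-scale controlling how quickly correlation decays with distance.

```python
import numpy as np

def K(xi, xj, h, lam):
    # Gaussian kernel from the answer: K(x_i, x_j) = h^2 exp(-(x_i - x_j)^2 / lam^2)
    return h**2 * np.exp(-((xi - xj) ** 2) / lam**2)

# h sets the marginal variance: K(x, x) = h^2 whatever lam is.
print(K(0.0, 0.0, h=2.0, lam=0.5))  # 4.0

# lam is a length-scale: a larger lam keeps distant points correlated.
print(K(0.0, 1.0, h=1.0, lam=0.5) < K(0.0, 1.0, h=1.0, lam=5.0))  # True
```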
7,273
Are inconsistent estimators ever preferable?
This answer describes a realistic problem where a natural consistent estimator is dominated (outperformed for all possible parameter values for all sample sizes) by an inconsistent estimator. It is motivated by the idea that consistency is best suited for quadratic losses, so using a loss departing strongly from that (such as an asymmetric loss) should render consistency almost useless in evaluating the performance of estimators. Suppose your client wishes to estimate the mean of a variable (assumed to have a symmetric distribution) from an iid sample $(x_1, \ldots, x_n)$, but they are averse to either (a) underestimating it or (b) grossly overestimating it. To see how this might work out, let us adopt a simple loss function, understanding that in practice the loss might differ from this one quantitatively (but not qualitatively). Choose units of measurement so that $1$ is the largest tolerable overestimate and set the loss of an estimate $t$ when the true mean is $\mu$ to equal $0$ whenever $\mu \le t\le \mu+1$ and equal to $1$ otherwise. The calculations are particularly simple for a Normal family of distributions with mean $\mu$ and variance $\sigma^2 \gt 0$, for then the sample mean $\bar{x}=\frac{1}{n}\sum_i x_i$ has a Normal$(\mu, \sigma^2/n)$ distribution. The sample mean is a consistent estimator of $\mu$, as is well known (and obvious). Writing $\Phi$ for the standard normal CDF, the expected loss of the sample mean equals $1/2 + \Phi(-\sqrt{n}/\sigma)$: $1/2$ comes from the 50% chance that the sample mean will underestimate the true mean and $\Phi(-\sqrt{n}/\sigma)$ comes from the chance of overestimating the true mean by more than $1$. The expected loss of $\bar{x}$ equals the blue area under this standard normal PDF. The red area gives the expected loss of the alternative estimator, below. They differ by replacing the solid blue area between $-\sqrt{n}/(2\sigma)$ and $0$ by the smaller solid red area between $\sqrt{n}/(2\sigma)$ and $\sqrt{n}/\sigma$. 
That difference grows as $n$ increases. An alternative estimator given by $\bar{x}+1/2$ has an expected loss of $2\Phi(-\sqrt{n}/(2\sigma))$. The symmetry and unimodality of normal distributions imply its expected loss is always better than that of the sample mean. (This makes the sample mean inadmissible for this loss.) Indeed, the expected loss of the sample mean has a lower limit of $1/2$ whereas that of the alternative converges to $0$ as $n$ grows. However, the alternative clearly is inconsistent: as $n$ grows, it converges in probability to $\mu+1/2 \ne \mu$. Blue dots show loss for $\bar{x}$ and red dots show loss for $\bar{x}+1/2$ as a function of sample size $n$.
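The two expected-loss formulas above can be verified by simulation. A sketch with the arbitrary choices $\mu = 0$, $\sigma = 1$, $n = 4$ (the standard normal CDF is built from `math.erf`):

```python
import numpy as np
from math import erf, sqrt

def Phi(x):  # standard normal CDF
    return 0.5 * (1 + erf(x / sqrt(2)))

rng = np.random.default_rng(2)
mu, sigma, n, reps = 0.0, 1.0, 4, 200_000

xbar = rng.normal(mu, sigma, (reps, n)).mean(axis=1)

def loss(t):
    # 0-1 loss from the answer: zero iff mu <= t <= mu + 1
    return 1.0 - ((t >= mu) & (t <= mu + 1))

mc_mean = loss(xbar).mean()         # consistent estimator: xbar
mc_shift = loss(xbar + 0.5).mean()  # inconsistent estimator: xbar + 1/2

# Analytic expected losses derived in the answer:
th_mean = 0.5 + Phi(-sqrt(n) / sigma)
th_shift = 2 * Phi(-sqrt(n) / (2 * sigma))

print(round(mc_mean, 3), round(th_mean, 3))    # ~0.523
print(round(mc_shift, 3), round(th_shift, 3))  # ~0.317, smaller for every n
```

Repeating this for larger $n$ shows the consistent estimator's loss pinned above $1/2$ while the shifted estimator's loss drops toward $0$.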
7,274
Are inconsistent estimators ever preferable?
Here is a very real situation where an inconsistent estimator is preferable due to constraints on sampling. I point to a variation of 'Importance Sampling' in sampling theory that would most likely constitute an inconsistent but improved estimator of the sample mean, where the correct percentage weighting of this class is not known (or is itself the subject of investigation), but the class is selected as 'the best available estimate'. For example, take a poor country where a large percentage of the population does not have bank accounts. Assume you were given access to spending data for those with accounts to develop figures for the nation as a whole. This would closely represent the country's actual spending pattern, but because of the precise impact of unreported cash income and of differing spending among those without bank accounts, it is not expected to be completely 'consistent' with the country's actual total domestic spending. The large weight of those with bank accounts still makes it superior, albeit distorted, compared with the sampling variance expected under a simple random sampling scheme. Note that no matter how precisely one gathers the samples in the 'Importance Sampling' stratum alone (so that mathematically the estimate converges in probability to this class's true value), it remains an inconsistent estimator for the parent population (as limitations on out-of-class sampling mean it cannot converge in probability to the parent population's mean).
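The point can be illustrated with a toy simulation (all population figures below are invented): sampling only within the banked stratum converges to the stratum mean, not the population mean, so the estimator is inconsistent for the population no matter how large the sample.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical population: 60% with bank accounts, 40% without,
# with different spending distributions (all numbers invented).
N = 100_000
banked = rng.normal(120, 30, int(0.6 * N))
unbanked = rng.normal(80, 30, N - int(0.6 * N))
pop_mean = np.concatenate([banked, unbanked]).mean()  # ~104

# Sampling only among account holders: the estimate converges to the
# stratum mean (~120), never to the population mean, hence inconsistent.
for m in (100, 10_000, len(banked)):
    est = rng.choice(banked, m, replace=False).mean()
    print(m, round(est, 1), round(est - pop_mean, 1))
```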
7,275
Are inconsistent estimators ever preferable?
More specifically, are there examples of an inconsistent estimator which outperforms a reasonable consistent estimator for all finite n (with respect to some suitable loss function)? Yes there are, and they are probably simpler and more common than you think. Moreover, complex or unusual loss functions are not needed: the usual MSE is enough. The crucial concept here is the bias-variance trade-off. Even in a simple linear-model setting, a wrong/misspecified model, which involves biased and inconsistent estimators for the parameters and for the entire function, can be better than the correct one if our goal is prediction. And prediction is very relevant in the real world. The example is simple. Suppose the true model is: $y = \beta_1 x_1 + \beta_2 x_2 + \epsilon$. We can estimate several linear regressions: a short one like $y = \theta_1 x_1 + u$, or a longer one that represents the empirical counterpart of the true model. Now, the short regression is wrong (it involves inconsistent and biased estimates of the parameters and of the function), yet it is not certain that the longer (consistent) regression is better for prediction (MSE loss). Note that this story holds precisely in the finite-sample setting, as you requested, not asymptotically. My point is clearly and exhaustively explained in: Shmueli - To Explain or to Predict - Statistical Science 2010, Vol. 25, No. 3, 289-310. EDIT. For clarification I add something that, I hope, can be useful to readers. I use, as in the article cited, the concept of bias in a quite general way; it can be applied to both unbiased and consistent estimators. These two things are different, but the story above holds in both cases. From now on I speak about bias, and we can use it against consistency as well (so, biased estimators = inconsistent estimators). The concept of bias usually refers to parameters (see Wikipedia: https://en.wikipedia.org/wiki/Consistent_estimator#Bias_versus_consistency; https://en.wikipedia.org/wiki/Bias_of_an_estimator). 
However, it is possible to use it more generally as well. Suffice it to say that not all estimated statistical models (say $f$) are parametric, but all of them can be biased relative to the true model (say $F$). Perhaps in this way we conflate consistency and misspecification problems, but to my knowledge these two can be viewed as two faces of the same coin. Now, the short estimated model (OLS regression) above, $f_{short}$, is biased relative to the true model $F$. Alternatively, we can estimate another regression, say $f_{long}$, in which all the correct explanatory variables are included, and potentially others are added; then $f_{long}$ is a consistent estimator of $F$. If we estimate $f_{true}$, in which all and only the correct explanatory variables are included, we are in the best case; or at least so it seems. This is often the paradigm in econometrics, the field where I am most confident. However, Shmueli (2010) points out that explanation (causal inference) and prediction are different goals, even though they are often erroneously conflated. In fact, at least if $n$ is finite, as it always is in practice, $f_{short}$ can be better than $f_{true}$ if our goal is prediction. I cannot give you an actual example here; the favourable conditions are listed in the article and also in this related and interesting question (Paradox in model selection (AIC, BIC, to explain or to predict?)), which uses an example like the one above. Let me note that until a few years ago this fact (the bias-variance story) was highly undervalued in the econometrics literature, but in the machine learning literature it is not. For example, LASSO and ridge estimators, absent from many general econometrics textbooks but standard in machine learning ones, make sense primarily because the story above holds. 
We can also consider the parameter perspective. In the example above, $\theta_1$ comes from the short regression and, apart from a few special cases, is biased relative to $\beta_1$. This fact comes from the omitted-variable-bias story, a classic topic in any econometrics textbook. Now, if we are interested precisely in the $\beta$s, this problem must be resolved, but for prediction goals it need not be. In the latter case $f_{short}$, and therefore $\theta_1$, can be better than the consistent estimators, i.e. $f_{true}$ and its parameters. Now we have to face a nuisance question. Consistency is an asymptotic property; however, this does not mean that we can speak about consistency only in the theoretical case where $n=\infty$. Consistency, in any form, is useful in practice only because if $n$ is large we can say that the property approximately holds. Unfortunately, in most cases we do not have a precise threshold for $n$, although sometimes we have an idea. Frequently consistency is simply viewed as a weaker condition than unbiasedness, because in many practical cases unbiased estimators are also consistent. In practice we can often speak about consistency and not about unbiasedness, because the former can hold while the latter surely does not; in econometrics it is almost always so. However, even in these cases, it is absolutely not true that the bias-variance trade-off, in the sense above, disappears. Ideas like this lead precisely to the dramatic errors that Shmueli (2010) underscores. We have to remember that $n$ can be large enough for some things and not for others, even within the same model; usually we know nothing about that. Last point. The bias-variance story, with respect to the usual MSE loss, can also be told in another direction that is completely focused on parameter estimation. Any estimator has a mean and a variance. Now, if an estimator is biased but also has lower variance than a competitor that is unbiased and/or consistent, it is not obvious which is better. There is exactly a bias-variance trade-off, as explained in: Murphy (2012) - Machine Learning: A Probabilistic Perspective, p. 202.
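The short-vs-long regression claim can be checked in a small Monte Carlo sketch (the parameter values are my own choices: strongly correlated regressors, a small true $\beta_2$, small $n$). On average, the misspecified short model predicts the true regression function better than the correctly specified long one.

```python
import numpy as np

rng = np.random.default_rng(4)
b1, b2, sigma, n, n_test, reps = 1.0, 0.1, 1.0, 20, 200, 2000

def draw_x(m):
    # x1 and x2 strongly correlated (rho = 0.9)
    x1 = rng.standard_normal(m)
    x2 = 0.9 * x1 + np.sqrt(1 - 0.9**2) * rng.standard_normal(m)
    return x1, x2

gap = []
for _ in range(reps):
    x1, x2 = draw_x(n)
    y = b1 * x1 + b2 * x2 + sigma * rng.standard_normal(n)
    beta_long, *_ = np.linalg.lstsq(np.column_stack([x1, x2]), y, rcond=None)
    beta_short, *_ = np.linalg.lstsq(x1[:, None], y, rcond=None)

    t1, t2 = draw_x(n_test)
    f_true = b1 * t1 + b2 * t2  # noiseless target: pure estimation error
    mse_long = np.mean((np.column_stack([t1, t2]) @ beta_long - f_true) ** 2)
    mse_short = np.mean((t1 * beta_short[0] - f_true) ** 2)
    gap.append(mse_long - mse_short)

# Positive gap: the inconsistent short model predicts better on average.
print(np.mean(gap) > 0)  # True
```

The long model pays an extra variance cost of roughly $\sigma^2 p/n$ for estimating two coefficients; the short model's squared bias, $\beta_2^2 \mathrm{Var}(x_2 \mid x_1)$, is far smaller here, so dropping $x_2$ wins.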
7,276
Are inconsistent estimators ever preferable?
I can't comment, so I will add this as an answer. Whuber's answer just shows that one specific inconsistent estimator can be better than another specific consistent estimator. Since the question was: "are there examples of an inconsistent estimator which outperforms a reasonable consistent estimator for all finite n", his answer is of course fine. However, this answer may give readers the impression that one needs to use an inconsistent estimator, and that is clearly not the case here. For instance, in Whuber's case we can take the estimator to be the upper end of a confidence interval, which will underestimate the true mean only at a chosen significance level, and thus will be superior to the mean itself. This estimator is still consistent, since the upper end of the confidence interval converges to the true $\mu$ as the sample size increases.
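Under the 0-1 loss from that thread (loss is zero iff $\mu \le t \le \mu + 1$), the expected loss of the upper-confidence-bound estimator $\bar{x} + z_{0.95}\,\sigma/\sqrt{n}$ has a closed form: it underestimates $\mu$ with probability 5% and overshoots $\mu + 1$ with probability $\Phi(z_{0.95} - \sqrt{n}/\sigma)$. A sketch (assuming $\sigma = 1$ known) showing it beats $\bar{x}$ at every $n$ while the added shift vanishes, confirming consistency:

```python
from math import erf, sqrt

def Phi(x):  # standard normal CDF
    return 0.5 * (1 + erf(x / sqrt(2)))

z = 1.645  # upper 95% quantile of the standard normal
sigma = 1.0

for n in (4, 25, 100, 10_000):
    shift = z * sigma / sqrt(n)  # -> 0: the estimator stays consistent
    loss_mean = 0.5 + Phi(-sqrt(n) / sigma)       # loss of xbar
    loss_upper = 0.05 + Phi(z - sqrt(n) / sigma)  # loss of xbar + shift
    print(n, round(shift, 4), round(loss_mean, 4), round(loss_upper, 4))
```

Its expected loss levels off at the chosen significance level (here 0.05) rather than tending to zero, so the inconsistent $\bar{x} + 1/2$ still eventually does better, but the consistent version already dominates $\bar{x}$ itself.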
7,277
Equivalence between least squares and MLE in Gaussian model
In the model $ Y = X \beta + \epsilon $ where $\epsilon \sim N(0,\sigma^{2})$, the log-likelihood of $Y|X$ for a sample of $n$ subjects is (up to an additive constant) $$ \frac{-n}{2} \log(\sigma^{2}) - \frac{1}{2 \sigma^{2}} \sum_{i=1}^{n} (y_{i}-x_{i} \beta)^{2} $$ Viewed as a function of $\beta$ only, the maximizer is exactly the value that minimizes $$ \sum_{i=1}^{n} (y_{i}-x_{i} \beta)^{2} $$ Does this make the equivalence clear?
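The equivalence is easy to check numerically; a small sketch (invented toy data) showing that gradient descent on the negative Gaussian log-likelihood in $\beta$ lands exactly on the least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, -2.0, 0.5])
y = X @ beta_true + rng.normal(size=n)

# least-squares solution
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# minimize the negative log-likelihood in beta directly (sigma^2 fixed, say at 1:
# it only rescales the objective) via plain gradient descent
def nll_grad(beta):
    return -X.T @ (y - X @ beta)  # gradient of 0.5 * ||y - X beta||^2

beta_mle = np.zeros(p)
for _ in range(2000):
    beta_mle -= 0.001 * nll_grad(beta_mle)

print(beta_ols)
print(beta_mle)  # identical up to optimization tolerance
```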
7,278
Does Cox Regression have an underlying Poisson distribution?
Yes, there is a link between these two regression models. Here is an illustration. Suppose the baseline hazard is constant over time: $h_{0}(t) = \lambda$. In that case, the survival function is $S(t) = \exp\left(-\int_{0}^{t} \lambda du\right) = \exp(-\lambda t)$ and the density function is $f(t) = h(t) S(t) = \lambda \exp(-\lambda t)$. This is the pdf of an exponential random variable with expectation $\lambda^{-1}$. Such a configuration yields the following parametric Cox model (with obvious notation): $h_{i}(t) = \lambda \exp(x'_{i} \beta)$. In the parametric setting the parameters are estimated using the classical likelihood method. The log-likelihood is given by $l = \sum_{i} \left\{ d_{i}\log(h_{i}(t_{i})) - t_{i} h_{i}(t_{i}) \right\}$ where $d_{i}$ is the event indicator. Up to an additive constant, this is nothing but the log-likelihood of the $d_{i}$'s seen as realizations of Poisson variables with means $\mu_{i} = t_{i}h_{i}(t_{i})$. As a consequence, one can obtain estimates using the following Poisson model: $\log(\mu_{i}) = \log(t_{i}) + \beta_0 + x_{i}'\beta$ where $\beta_0 = \log(\lambda)$.
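Here is a small numerical sketch of this equivalence in the no-covariate case (my own toy setup with exponential event times and independent censoring): the survival-likelihood MLE $\hat\lambda = \sum_i d_i / \sum_i t_i$ coincides with the estimate from the Poisson model with offset $\log(t_i)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam_true = 5000, 0.5
t_event = rng.exponential(1 / lam_true, size=n)  # exponential (constant-hazard) event times
t_cens = rng.exponential(2.0, size=n)            # independent censoring times
t = np.minimum(t_event, t_cens)                  # observed follow-up time
d = (t_event <= t_cens).astype(float)            # event indicator

# survival log-likelihood l = sum(d*log(lam) - lam*t) is maximized at:
lam_surv = d.sum() / t.sum()

# Poisson model log(mu_i) = log(t_i) + b0, treating d_i ~ Poisson(mu_i):
# score dl/db0 = sum(d_i - t_i*exp(b0)) = 0  =>  exp(b0) = sum(d)/sum(t)
b0 = 0.0
for _ in range(50):  # Newton's method on the Poisson score
    mu = t * np.exp(b0)
    b0 += (d.sum() - mu.sum()) / mu.sum()

print(lam_surv, np.exp(b0))  # the two estimates agree
```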
7,279
Why Beta/Dirichlet Regression are not considered Generalized Linear Models?
Check the original reference: Ferrari, S., & Cribari-Neto, F. (2004). Beta regression for modelling rates and proportions. Journal of Applied Statistics, 31(7), 799-815. As the authors note, the parameters of the re-parametrized beta distribution are correlated: "Note that the parameters $\beta$ and $\phi$ are not orthogonal, in contrast to what is verified in the class of generalized linear regression models (McCullagh and Nelder, 1989)." So while the model looks like a GLM and quacks like a GLM, it does not perfectly fit the framework.
7,280
Why Beta/Dirichlet Regression are not considered Generalized Linear Models?
The answer by @probabilityislogic is on the right track. The beta distribution is in the two-parameter exponential family. The simple GLM models described by Nelder and Wedderburn (1972) do not include all of the distributions in the two-parameter exponential family. In terms of the article by N&W, the GLM applies to density functions of the following type (this was later named the exponential dispersion family in Jørgensen 1987): $$\pi(z;\theta,\phi) = \exp \left[ \alpha(\phi) \lbrace z\theta - g(\theta) +h(z)\rbrace +\beta(\phi,z) \right]$$ with an additional link function $f()$ and linear model for the natural parameter $\theta = f(\mu) = f(X\beta)$. So we could also rewrite the above distribution as: $$\pi(z;\mu,\phi) = \exp \left[z(f(\mu)\alpha(\phi)) +h(z)\alpha(\phi) - g(f(\mu))\alpha(\phi) +\beta(\phi,z) \right]$$ The two-parameter exponential family is: $$ f(z;\theta_1,\theta_2) = \exp \left[T_1(z)\eta_1(\theta_1,\theta_2) + T_2(z)\eta_2(\theta_1,\theta_2) - g(\theta_1,\theta_2) +h(z) \right] $$ which looks similar but is more general (even when one of the $\theta$'s is constant). The difference is clear, and putting the beta distribution into the GLM form is not possible. However, I lack sufficient understanding to create a more intuitive and well-informed answer (I have a feeling that there can be much deeper and more elegant relationships to a variety of fundamental principles). The GLM generalizes the distribution of the error by using a univariate exponential dispersion model in place of a least-squares model, and generalizes the linear relationship in the mean by using a link function. The best and simplest intuition is the dispersion term $\alpha(\phi)$ in the exponential, which multiplies everything, so the dispersion does not vary with $\theta$. By contrast, several two-parameter exponential families, and quasi-likelihood methods, allow the dispersion parameter to be a function of $\theta$ as well.
7,281
Why Beta/Dirichlet Regression are not considered Generalized Linear Models?
I don't think the beta distribution is part of the exponential dispersion family. To get this, you need a density of the form $$f(y;\theta,\tau)=\exp\left(\frac{y\theta - c(\theta)}{\tau} + d(y,\tau)\right)$$ for specified functions $c()$ and $d()$. The mean is given by $c'(\theta)$ and the variance by $\tau c''(\theta)$. The parameter $\theta$ is called the canonical parameter. The beta distribution cannot be written this way - one way to see this is by noting that there is no $y$ term in the log-likelihood - it has $\log[y]$ and $\log[1-y]$ instead: $$f_{beta}(y;\mu,\phi)=\exp\left(\phi\mu\log\left[\frac{y}{1-y}\right] +\phi\log[1-y] - \log B(\phi\mu,\phi(1-\mu))-\log[y(1-y)]\right)$$ Yet another way to see that the beta is not in the exponential dispersion family is that it can be written as $y=\frac{x}{x+z}$ where $x$ and $z$ are independent and both follow gamma distributions with the same scale parameter (and the gamma is an exponential family).
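The gamma-ratio representation in the last sentence is easy to verify by simulation (a quick sketch; the parameter values are arbitrary): the ratio's sample moments match the Beta$(a,b)$ moments, and the common scale cancels.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, theta = 2.0, 5.0, 1.3            # shapes a, b; common scale theta
x = rng.gamma(a, theta, size=1_000_000)
z = rng.gamma(b, theta, size=1_000_000)
y = x / (x + z)                        # should be Beta(a, b); the scale theta cancels

# compare sample moments with the Beta(a, b) moments
mean_beta = a / (a + b)                          # = 2/7
var_beta = a * b / ((a + b) ** 2 * (a + b + 1))  # = 10/392
print(y.mean(), mean_beta)
print(y.var(), var_beta)
```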
7,282
What are the most useful sources of economics data?
For the US:
FRED: Federal Reserve Economic Data (the best)
Bureau of Labor Statistics
Bureau of Economic Analysis
U.S. Census
7,283
What are the most useful sources of economics data?
The World Bank data API is particularly good and I wish that more global and state-level organisations would release this much. Here are a few more to complement @check123: UK government data project; US government data project; Infochimps - massive resource of a wide variety of public and private (commercial) datasources - plus their API; Freebase (now owned by Google) - open data resource; DBpedia - an approach to using the Wikipedia API; Wikipedia API - or go direct and access Wikipedia direct; And the lazy person's choice, there is the CIA World Factbook. I find that the data is sometimes a bit wrong, but it is a useful place to get a rather plentiful overview. This is an exciting area of development so expect many more data resources to come. Follow the Open Data page at Wikipedia for regular updates.
7,284
What are the most useful sources of economics data?
In addition to what you've got already, there's http://www.zanran.com/q/ - a search-engine dedicated to numerical data
7,285
What are the most useful sources of economics data?
Local/Foreign governments:
Data from the Finance Ministry and its bodies
Reserve Bank
Official publication of the country's annual accounts
Academic Sources:
Research papers and journals
Internal archives of universities and institutions
Dedicated policy and welfare research centers
Theory/text books often have further references
International Aggregates:
World Bank Data
United Nations Data
IMF Data
ADB Data
WTO Stats
International NGO(s)
Print publications from multilateral institutions (like the above)
Private Sources:
Research and surveys by local/national and international NGO(s)
Publications and surveys from mass media (newspapers, news channels, magazines, etc.)
Research and surveys from private organizations (e.g. AC Nielsen)
Publications and reports from financial organizations such as banks, credit-rating agencies, etc.
7,286
What are the most useful sources of economics data?
The U.S. Census Bureau was one of the first government agencies to put data on the web. I still remember the elation I felt back in 1995 when I found out I could get up-to-date CPS reports and data online instead of having to go through library shelves. They provide both summary tables and public-use microdata. Similarly, the U.S. Bureau of Labor Statistics and the U.S. Bureau of Economic Analysis provide easy online access to both summary and detailed series. BLS's National Longitudinal Surveys are used in a lot of empirical micro research. The U.S. Bureau of Transportation Statistics has a lot of tables, but some of them are in quite inconvenient formats. E.g., statistics on boating accidents by the U.S. Coast Guard came in PDF files the last time I checked. The U.S. Centers for Disease Control have an incredible wealth of data on both diseases and behavioral information. Among them is the Behavioral Risk Factor Surveillance System, which features prominently in health-related research these days. The Health & Retirement Study "surveys a representative sample of more than 26,000 Americans over the age of 50 every two years."
7,287
What are the most useful sources of economics data?
Rescued from a deleted answer: If you are interested in the European Union or in some of its member states, you can have a look at Eurostat's databases.
7,288
What are the most useful sources of economics data?
Don't forget http://www.icpsr.umich.edu/
7,289
What are the most useful sources of economics data?
For macroeconomic and financial data, Quandl is a great resource, because it effectively acts as a wrapper around many of the excellent sources mentioned here, and many others. What is more, library(Quandl) makes accessing the data in R gratifyingly simple.
7,290
What are the most useful sources of economics data?
If you're looking for free monthly global economic indicators to download, have a look at the database on the blog www.morethanbrics.com/blog. They publish a monthly database for up to 169 countries going back to 1995. I like it because you can download the whole Excel file for free and it's updated on a monthly basis. It's based on World Bank data and includes, among others, the following:
Real GDP Growth
CPI
Core CPI
Industrial Production
Retail Sales
Imports
Exports
Foreign Exchange Reserves
Terms of Trade
M2 Multiplier
7,291
What are the most useful sources of economics data?
There are closely related questions on the Economics and Quant stacks: https://economics.stackexchange.com/questions/4679/what-are-some-good-repositories-for-economic-data https://quant.stackexchange.com/questions/141/what-data-sources-are-available-online Answers from there: The American Economic Association has a list of resources for economists, including a page for data; there you find links to many institutions that offer all kinds of data, as well as further journals with data archives for the studies they publish. In the ReplicationWiki (which I work on) we have information on more than 2000 empirical studies, and you can search for the kind of data, software, and methods used in each, whether the material is available, and whether replications are known. Many studies can be browsed by JEL codes or keywords. The categorization of data sources and the geographical origin of the data remains very incomplete, but it is a wiki, so everyone can contribute and make suggestions.
7,292
What is the difference between Conv1D and Conv2D?
I'd like to explain the difference visually and in detail (with comments in the code), using a simple approach. Let's first check Conv2D in TensorFlow.

c1 = [[0, 0, 1, 0, 2], [1, 0, 2, 0, 1], [1, 0, 2, 2, 0], [2, 0, 0, 2, 0], [2, 1, 2, 2, 0]]
c2 = [[2, 1, 2, 1, 1], [2, 1, 2, 0, 1], [0, 2, 1, 0, 1], [1, 2, 2, 2, 2], [0, 1, 2, 0, 1]]
c3 = [[2, 1, 1, 2, 0], [1, 0, 0, 1, 0], [0, 1, 0, 0, 0], [1, 0, 2, 1, 0], [2, 2, 1, 1, 1]]
data = tf.transpose(tf.constant([[c1, c2, c3]], dtype=tf.float32), (0, 2, 3, 1))
# we transpose [batch, in_channels, in_height, in_width] to [batch, in_height, in_width, in_channels],
# where batch = 1, in_channels = 3 (c1, c2, c3, or x[:, :, 0], x[:, :, 1], x[:, :, 2] in the gif),
# and in_height and in_width are both 5 (the sizes of the blue matrices without padding)
f2c1 = [[0, 1, -1], [0, -1, 0], [0, -1, 1]]
f2c2 = [[-1, 0, 0], [1, -1, 0], [1, -1, 0]]
f2c3 = [[-1, 1, -1], [0, -1, -1], [1, 0, 0]]
filters = tf.transpose(tf.constant([[f2c1, f2c2, f2c3]], dtype=tf.float32), (2, 3, 1, 0))
# transpose [out_channels, in_channels, filter_height, filter_width] to [filter_height, filter_width, in_channels, out_channels]
# out_channels is 1 (in the gif it is 2, since here we only use the one filter W1), in_channels is 3 because data
# has three channels (c1, c2, c3), and filter_height and filter_width are both 3 (the sizes of the filter W1)
# f2c1, f2c2, f2c3 are w1[:, :, 0], w1[:, :, 1] and w1[:, :, 2] in the gif
output = tf.squeeze(tf.nn.conv2d(data, filters, strides=2, padding=[[0, 0], [1, 1], [1, 1], [0, 0]]))
# this is just o[:, :, 1] in the gif
# <tf.Tensor: id=93, shape=(3, 3), dtype=float32, numpy=
# array([[-8., -8., -3.],
#        [-3.,  1.,  0.],
#        [-3., -8., -5.]], dtype=float32)>

Conv1D is a special case of Conv2D, as stated in this paragraph from the TensorFlow doc of Conv1D:

Internally, this op reshapes the input tensors and invokes tf.nn.conv2d. For example, if data_format does not start with "NC", a tensor of shape [batch, in_width, in_channels] is reshaped to [batch, 1, in_width, in_channels], and the filter is reshaped to [1, filter_width, in_channels, out_channels]. The result is then reshaped back to [batch, out_width, out_channels] (where out_width is a function of the stride and padding as in conv2d) and returned to the caller.

Let's see how we can transfer a Conv1D problem into a Conv2D problem. Since Conv1D is usually used in NLP scenarios, we can illustrate that with the NLP problem below.

cat = [0.7, 0.4, 0.5]
sitting = [0.2, -0.1, 0.1]
there = [-0.5, 0.4, 0.1]
dog = [0.6, 0.3, 0.5]
resting = [0.3, -0.1, 0.2]
here = [-0.5, 0.4, 0.1]
sentence = tf.constant([[cat, sitting, there, dog, resting, here]])
# sentence[:, :, 0] is equivalent to x[:, :, 0] or c1 in the first example, and the same for sentence[:, :, 1] and sentence[:, :, 2]
data = tf.reshape(sentence, (1, 1, 6, 3))
# we reshape [batch, in_width, in_channels] to [batch, 1, in_width, in_channels] according to the quote above
# each dimension in the embedding is a channel (three in_channels)
f3c1 = [0.6, 0.2]   # equivalent to f2c1 in the first code snippet, or w1[:, :, 0] in the gif
f3c2 = [0.4, -0.1]  # equivalent to f2c2 in the first code snippet, or w1[:, :, 1] in the gif
f3c3 = [0.5, 0.2]   # equivalent to f2c3 in the first code snippet, or w1[:, :, 2] in the gif
# tf.constant([[f3c1, f3c2, f3c3]]) has shape [out_channels, in_channels, filter_width]: [1, 3, 2]
# here we also have only one filter, again with three channels in it;
# compare these three with the three channels in W1 for the Conv2D in the gif
filter1D = tf.transpose(tf.constant([[f3c1, f3c2, f3c3]]), (2, 1, 0))  # shape: [2, 3, 1] for the conv1d example
filters = tf.reshape(filter1D, (1, 2, 3, 1))  # this should really be expand_dims
# transpose [out_channels, in_channels, filter_width] to [filter_width, in_channels, out_channels] and then reshape
# the result to [1, filter_width, in_channels, out_channels], as described in the snippet from the TensorFlow doc of conv1d
output = tf.squeeze(tf.nn.conv2d(data, filters, strides=(1, 1, 2, 1), padding="VALID"))
# the numbers in strides correspond to the [batch, 1, in_width, in_channels] layout of the data input
# <tf.Tensor: id=119, shape=(3,), dtype=float32, numpy=array([0.9, 0.09999999, 0.12], dtype=float32)>

Let's do the same using Conv1D (also in TensorFlow):

output = tf.squeeze(tf.nn.conv1d(sentence, filter1D, stride=2, padding="VALID"))
# <tf.Tensor: id=135, shape=(3,), dtype=float32, numpy=array([0.9, 0.09999999, 0.12], dtype=float32)>
# here the stride applies along in_width

We can see that the 2D in Conv2D means each channel in the input and filter is 2-dimensional (as in the gif example), and the 1D in Conv1D means each channel in the input and filter is 1-dimensional (as in the cat-and-dog NLP example).
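If TensorFlow isn't at hand, the same strided dot product can be sketched in plain NumPy. The helper conv1d_valid below is illustrative (not a library function); it reproduces the [0.9, 0.1, 0.12] result from the TensorFlow snippets above.

```python
import numpy as np

# embeddings for the six words, shape [in_width=6, in_channels=3]
sentence = np.array([[0.7, 0.4, 0.5],    # cat
                     [0.2, -0.1, 0.1],   # sitting
                     [-0.5, 0.4, 0.1],   # there
                     [0.6, 0.3, 0.5],    # dog
                     [0.3, -0.1, 0.2],   # resting
                     [-0.5, 0.4, 0.1]])  # here

# one filter of width 2 over 3 channels, shape [filter_width=2, in_channels=3]
# row 0 holds the channel weights at width position 0 (f3c1[0], f3c2[0], f3c3[0]), row 1 at position 1
kernel = np.array([[0.6, 0.4, 0.5],
                   [0.2, -0.1, 0.2]])

def conv1d_valid(x, w, stride=2):
    """Slide w over x along the width axis, summing over width and channels ("VALID" padding)."""
    width = x.shape[0]
    fw = w.shape[0]
    return np.array([np.sum(x[i:i + fw] * w)
                     for i in range(0, width - fw + 1, stride)])

out = conv1d_valid(sentence, kernel)
# out == [0.9, 0.1, 0.12], matching the conv1d/conv2d outputs above
```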
7,293
What is the difference between Conv1D and Conv2D?
Convolution is a mathematical operation where you "summarize" a tensor, matrix or vector into a smaller one. If your input matrix is one dimensional then you summarize along that one dimension, and if a tensor has n dimensions then you could summarize along all n dimensions. Conv1D and Conv2D summarize (convolve) along one or two dimensions, respectively. For instance, you could convolve a vector into a shorter vector as follows. Take a "long" vector A with n elements and convolve it with a weight vector W with m elements into a "short" (summary) vector B with n-m+1 elements: $$b_i=\sum_{j=0}^{m-1} a_{i+j}\,w_j$$ where $i=[1,n-m+1]$. So, if you have a vector of length n and your weight vector also has length n with $w_i=1/n$, then the convolution will produce a scalar (a vector of length 1) equal to the average of all values in the input vector. It's a sort of degenerate convolution, if you wish. If the same weight vector is one element shorter than the input vector, then you get a moving average of length 2 in the output, etc. $$\begin{bmatrix} a:&a_1 & a_2 & a_3\\ w:&1/2 & 1/2&\\ w:&&1/2 & 1/2\\ \end{bmatrix}=\begin{bmatrix} b:&\frac{a_1+a_2} 2 & \frac{a_2+a_3} 2 \end{bmatrix} $$ You could do the same to a 3-dimensional tensor in the same way: $$b_{ikl}=\sum_{j_1=0}^{m_1-1}\sum_{j_2=0}^{m_2-1}\sum_{j_3=0}^{m_3-1} a_{i+j_1,k+j_2,l+j_3}\,w_{j_1 j_2 j_3}$$ where $i=[1,n_1-m_1+1],k=[1,n_2-m_2+1],l=[1,n_3-m_3+1]$.
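The moving-average case above can be checked in a few lines of plain NumPy (a minimal sketch of the "valid" convolution, output length n-m+1):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])   # the "long" vector A, n = 3
w = np.array([0.5, 0.5])        # weight vector W, m = 2

# slide W over A: output length n - m + 1 = 2
b = np.array([np.dot(a[i:i + len(w)], w) for i in range(len(a) - len(w) + 1)])
# b == [1.5, 2.5], i.e. (a1+a2)/2 and (a2+a3)/2

# degenerate case: W of length n with w_i = 1/n produces the overall average
mean = np.dot(a, np.full(len(a), 1.0 / len(a)))  # == 2.0
```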
7,294
What is the difference between Conv1D and Conv2D?
I will be using a PyTorch perspective; however, the logic remains the same.

When using Conv1d(), we have to keep in mind that we are most likely going to work with 2-dimensional inputs such as one-hot-encoded DNA sequences or black-and-white pictures. The only difference between the more conventional Conv2d() and Conv1d() is that the latter uses a 1-dimensional kernel, as shown in the picture below. Here, the height of your input data becomes the “depth” (or in_channels), and our rows become the kernel size. For example,

import torch
import torch.nn as nn

tensor = torch.randn(1, 100, 4)
output = nn.Conv1d(in_channels=100, out_channels=1, kernel_size=1, stride=1)(tensor)
# output.shape == [1, 1, 4]

We can see that the kernel automatically spans the height of the picture (just as in Conv2d() the depth of the kernel automatically spans the image’s channels), and therefore all we are left to give is the kernel size with respect to the span of the rows. We just have to remember that if we are assuming a 2-dimensional input, our filters become our columns and our rows become the kernel size.
7,295
What is the difference between Conv1D and Conv2D?
In summary:

In 1D CNN, the kernel moves in 1 direction. Input and output data of a 1D CNN are 2-dimensional. Mostly used on time-series data.
In 2D CNN, the kernel moves in 2 directions. Input and output data of a 2D CNN are 3-dimensional. Mostly used on image data.
In 3D CNN, the kernel moves in 3 directions. Input and output data of a 3D CNN are 4-dimensional. Mostly used on 3D image data (MRI, CT scans).

You can find more details here: https://medium.com/@xzz201920/conv1d-conv2d-and-conv3d-8a59182c4d6
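These dimensionalities can be illustrated directly in PyTorch, assuming its channels-first layouts (NCW / NCHW / NCDHW); the batch size, channel counts and spatial sizes below are arbitrary examples:

```python
import torch
import torch.nn as nn

x1 = torch.randn(8, 3, 50)          # 1D input: [batch, channels, width]              e.g. a time series
x2 = torch.randn(8, 3, 32, 32)      # 2D input: [batch, channels, height, width]      e.g. an image
x3 = torch.randn(8, 3, 16, 32, 32)  # 3D input: [batch, channels, depth, height, width] e.g. a CT volume

y1 = nn.Conv1d(3, 6, kernel_size=3)(x1)  # kernel slides along 1 axis
y2 = nn.Conv2d(3, 6, kernel_size=3)(x2)  # kernel slides along 2 axes
y3 = nn.Conv3d(3, 6, kernel_size=3)(x3)  # kernel slides along 3 axes
# with no padding and stride 1, each spatial size shrinks to n - 3 + 1:
# y1: [8, 6, 48]; y2: [8, 6, 30, 30]; y3: [8, 6, 14, 30, 30]
```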
7,296
What is the difference between Conv1D and Conv2D?
1D convolution is a cost saver; it works in the same way but assumes a 1-dimensional array that is multiplied element-wise with the input. If you want to visualize it, think of a single row or column of a matrix, i.e. a single dimension: when we multiply, we get an array of the same shape but with lower or higher values, so it helps in maximizing or minimizing the intensity of the values. For details refer to https://www.youtube.com/watch?v=qVP574skyuM
7,297
Origin of "5$\sigma$" threshold for accepting evidence in particle physics?
History and origin

According to Robert D Cousins$^{1}$ and Tommaso Dorigo$^{2}$, the origin of the $5\sigma$ threshold lies in the early particle physics work of the 60s, when numerous histograms of scattering experiments were investigated and searched for peaks/bumps that might indicate some newly discovered particle. The threshold is a rough rule to account for the multiple comparisons that are being made. Both authors refer to a 1968 article from Rosenfeld$^3$, which dealt with the question whether or not there are far-out mesons and baryons, for which several $4\sigma$ effects were measured. The article answered the question negatively by arguing that the number of published claims corresponds to the statistically expected number of fluctuations. Along with several calculations supporting this argument, the article promoted the use of the $5\sigma$ level:

Rosenfeld: "Before we go on to survey far-out mass spectra where bumps have been reported in $(K\pi\pi)_{3/2},(\pi \rho)^{--}$ we should first decide what threshold of significance to demand in 1968. I want to show you that although experimentalists should probably note $3\sigma$-effects, theoreticians and phenomenologists would do better to wait till the effect reaches $>4\sigma$."

and later in the paper (emphasis is mine)

Rosenfeld: "Then to repeat my warning at the beginning of this section; we are generating at least 100 000 potential bumps per year, and should expect several $4\sigma$ and hundreds of $3\sigma$ fluctuations. What are the implications? To the theoretician or phenomenologist the moral is simple; wait for $5\sigma$ effects."

Tommaso is careful to note that it did not simply start with the Rosenfeld article:

Tommaso: "However, we should note that the article was written in 1968, but the strict criterion of five standard deviations for discovery claims was not adopted in the seventies and eighties.
For instance, no such thing as a five-sigma criterion was used for the discovery of the W and Z bosons, which earned Rubbia and Van der Meer the Nobel Prize in physics in 1984."

But in the 80s the use of $5\sigma$ spread. For instance, the astronomer Steve Schneider$^4$ mentions in 1989 that it is something being taught (emphasis mine in the quote below):

Schneider: "Frequently, 'levels of confidence' of 95% or 99% are quoted for apparently discrepant data, but this amounts to only two or three statistical sigmas. I was taught not to believe anything less than five sigma, which if you think about it is an absurdly stringent requirement --- something like a 99.9999% confidence level. But of course, such a limit is used because the actual size of sigma is almost never known. There are just too many free variables in astronomy that we can't control or don't know about."

Yet, in the field of particle physics many publications were still based on $4\sigma$ discrepancies up till the late 90s. This only changed into $5\sigma$ at the beginning of the 21st century. It was probably prescribed as a guideline for publications around 2003 (see the prologue in Franklin's book Shifting Standards$^5$):

Franklin: By 2003 the 5-standard-deviation criterion for "observation of" seems to have been in effect ... A member of the BaBar collaboration recalls that about this time the 5-sigma criterion was issued as a guideline by the editors of the Physical Review Letters

Modern use

Currently, the $5\sigma$ threshold is a textbook standard. For instance, it occurs as a standard article on physics.org$^6$ and in some of Glen Cowan's works, such as the statistics section of the Review of Particle Physics from the Particle Data Group$^7$ (albeit with several critical sidenotes):

Glen Cowan: Often in HEP, the level of significance where an effect is said to qualify as a discovery is $Z = 5$, i.e., a $5\sigma$ effect, corresponding to a p-value of $2.87 \times 10^{−7}$.
One’s actual degree of belief that a new process is present, however, will depend in general on other factors as well, such as the plausibility of the new signal hypothesis and the degree to which it can describe the data, one’s confidence in the model that led to the observed p-value, and possible corrections for multiple observations out of which one focuses on the smallest p-value obtained (the “look-elsewhere effect”).

The use of the $5\sigma$ level is now ascribed to 4 reasons:

History: based on practice, one found that $5\sigma$ is a good threshold. (Exotic stuff seems to happen randomly, even between $3\sigma$ and $4\sigma$, like the recent 750 GeV diphoton excess.)

The look-elsewhere effect (or multiple comparisons): either because multiple hypotheses are tested, or because experiments are performed many times, people adjust for this (very roughly) by raising the bound to $5\sigma$. This relates to the history argument.

Systematic effects and uncertainty in $\sigma$: often the uncertainty of the experimental outcome is not well known. The $\sigma$ is derived, but the derivation includes weak assumptions such as the absence of systematic effects, or the possibility to ignore them. Increasing the threshold seems to be a way to sort of protect against these events. (This is a bit strange, though: the computed $\sigma$ has no relation to the size of systematic effects and the logic breaks down; an example is the "discovery" of superluminal neutrinos, which was reported to have a $6\sigma$ significance.)

Extraordinary claims require extraordinary evidence: scientific results are reported in a frequentist way, for instance using confidence intervals or p-values, but they are often interpreted in a Bayesian way. The $5\sigma$ level is claimed to account for this.
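For reference, the one-sided tail probabilities behind these thresholds follow from the standard normal survival function; a small stdlib-only sketch:

```python
import math

def one_sided_p(z):
    """P(X > z) for a standard normal X, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

for z in (3, 4, 5):
    print(f"{z} sigma -> p = {one_sided_p(z):.3g}")
# 3 sigma -> p = 0.00135
# 4 sigma -> p = 3.17e-05
# 5 sigma -> p = 2.87e-07  (the value in Cowan's quote)
```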
Currently several criticisms of the $5\sigma$ threshold have been written by Louis Lyons$^{8,9}$, and the earlier mentioned articles by Robert D Cousins$^{1}$ and Tommaso Dorigo$^{2}$ also provide critique.

Other Fields

It is interesting to note that many other scientific fields do not have similar thresholds or do not, somehow, deal with the issue. I imagine this makes a bit of sense in the case of experiments with humans, where it is very costly (or impossible) to extend an experiment that gave a .05 or .01 significance. The result of not taking these effects into account is that over half of the published results may be wrong, or at least are not reproducible. (This has been argued for the case of psychology by Monya Baker$^{10}$, and I believe many others have made similar arguments. I personally think that the situation may be even worse in nutritional science.) And now, people from fields other than physics are thinking about how they should deal with this issue (the case of medicine/pharmacology$^{11}$).

1. Cousins, R. D. (2017). The Jeffreys–Lindley paradox and discovery criteria in high energy physics. Synthese, 194(2), 395-432. arxiv link
2. Dorigo, T. (2013). Demystifying The Five-Sigma Criterion, from science20.com 2019-03-07
3. Rosenfeld, A. H. (1968). Are there any far-out mesons or baryons? web-source: escholarship
4. Burbidge, G., Roberts, M., Schneider, S., Sharp, N., & Tifft, W. (1990, November). Panel discussion: Redshift related problems. In NASA Conference Publication (Vol. 3098, p. 462). link to photocopy on harvard.edu
5. Franklin, A. (2013). Shifting standards: Experiments in particle physics in the twentieth century. University of Pittsburgh Press.
6. What does the 5 sigma mean? from physics.org 2019-03-07
7. Beringer, J., Arguin, J. F., Barnett, R. M., Copic, K., Dahl, O., Groom, D. E., ... & Yao, W. M. (2012). Review of particle physics. Physical Review D: Particles, Fields, Gravitation and Cosmology, 86(1), 010001. (Section 36.2.2, Significance tests, page 394, link aps.org)
8. Lyons, L. (2013). Discovering the Significance of 5 sigma. arXiv preprint arXiv:1310.1284. arxiv link
9. Lyons, L. (2014). Statistical Issues in Searches for New Physics. arXiv preprint. arxiv link
10. Baker, M. (2015). Over half of psychology studies fail reproducibility test. Nature News. from nature.com 2019-03-07
11. Horton, R. (2015). Offline: what is medicine's 5 sigma? The Lancet, 385(9976), 1380. from thelancet.com 2019-03-07
Origin of "5$\sigma$" threshold for accepting evidence in particle physics?
History and origin According to Robert D Cousins$^{1}$ and Tommaso Dorigo$^{2}$, the origin of the $5\sigma$ threshold origin lies in the early particle physics work of the 60s when numerous histogram
Origin of "5$\sigma$" threshold for accepting evidence in particle physics? History and origin According to Robert D Cousins$^{1}$ and Tommaso Dorigo$^{2}$, the origin of the $5\sigma$ threshold origin lies in the early particle physics work of the 60s when numerous histograms of scattering experiments were investigated and searched for peaks/bumps that might indicate some newly discovered particle. The threshold is a rough rule to account for the multiple comparisons that are being made. Both authors refer to a 1968 article from Rosenfeld$^3$, which dealt with the question whether or not there are far out mesons and baryons, for which several $4 \sigma$ effects where measured. The article answered the question negatively by arguing that the number of published claims corresponds to the statistically expected number of fluctuations. Along with several calculations supporting this argument the article promoted the use of the $5\sigma$ level: Rosenfeld: "Before we go on to survey far-out mass spectra where bumps have been reported in $(K\pi\pi)_{3/2},(\pi \rho)^{--}$ we should first decide what threshold of significance to demand in 1968. I want to show you that although experimentalists should probably note $3\sigma$-effects, theoreticians and phenomenologists would do better to wait till the effect reaches $>4\sigma$." and later in the paper (emphasis is mine) Rosenfeld: "Then to repeat my warning at the beginning of this section; we are generating at least 100 000 potential bumps per year, and should expect several $4\sigma$ and hundreds of $3\sigma$ fluctuations. What are the implications? To the theoretician or phenomenologist the moral is simple; wait for $5\sigma$ effects." Tommaso seems to be careful in stating that it started with the Rosenfeld article Tommaso: "However, we should note that the article was written in 1968, but the strict criterion of five standard deviations for discovery claims was not adopted in the seventies and eighties. 
For instance, no such thing as a five-sigma criterion was used for the discovery of the W and Z bosons, which earned Rubbia and Van der Meer the Nobel Prize in physics in 1984." But in the 80s the use of $5\sigma$ was spread out. For instance, the astronomer Steve Schneider$^4$ mentions in 1989 that it is something being taught (emphasize mine in the quote below): Schneider: "Frequently, 'levels of confidence' of 95% or 99% are quoted for apparently discrepant data, but this amounts to only two or three statistical sigmas. I was taught not to believe anything less than five sigma, which if you think about it is an absurdly stringent requirement --- something like a 99.9999% confidence level. But of course, such a limit is used because the actual size of sigma is almost never known. There are just too many free variables in astronomy that we can't control or don't know about." Yet, in the field of particle physics many publications where still based on $4\sigma$ discrepancies up till the late 90s. This only changed into $5\sigma$ at the beginnning of the 21th century. It is probably prescribed as a guidline for publications around 2003 (see the prologue in Franklin's book Shifting Standards$^5$) Franklin: By 2003 the 5-standard-deviation criterion for "observation of" seems to have been in effect ... A member of the BaBar collaboration recalls that about this time the 5-sigma criterion was issued as a guideline by the editors of the Physical Review Letters Modern use Currently, the $5\sigma$ threshold is a textbook standard. For instance, it occurs as a standard article on physics.org$^6$ or in some of Glen Cowan's works, such as the statistics section of the Review of Particle Physics from the particle data group$^7$ (albeit with several critical sidenotes) Glen Cowan: Often in HEP, the level of significance where an effect is said to qualify as a discovery is $Z = 5$, i.e., a $5\sigma$ effect, corresponding to a p-value of $2.87 \times 10^{−7}$ . 
One’s actual degree of belief that a new process is present, however, will depend in general on other factors as well, such as the plausibility of the new signal hypothesis and the degree to which it can describe the data, one’s confidence in the model that led to the observed p-value, and possible corrections for multiple observations out of which one focuses on the smallest p-value obtained (the “look-elsewhere effect”). The use of the $5\sigma$ level is now ascribed to 4 reasons: History based on practice one found that $5\sigma$ is a good threshold. (exotic stuff seems to happen randomly, even between $3\sigma$ to $4\sigma$, like recently the 750 GeV diphoton excess) The look elsewhere effect (or the multiple comparisons). Either because multiple hypotheses are tested, or because experiments are performed many times, people adjust for this (very roughly) by adjusting the bound to $5\sigma$. This relates to the history argument. Systematic effects and uncertainty in $\sigma$ often the uncertainty of the experiment outcome is not well known. The $\sigma$ is derived, but the derivation includes weak assumptions such as the absence of systematic effects, or the possibility to ignore them. Increasing the threshold seems to be a way to sort of a protect against these events. (This is a bit strange though. The computed $\sigma$ has no relation to the size of systematic effects and the logic breaks down, an example is the "discovery" of superluminal neutrino's which was reported to be having a $6\sigma$ significance.) Extraordinary claims require extraordinary evidence Scientific results are reported in a frequentist way, for instance using confidence intervals or p-values. But, they are often interpreted in a Bayesian way. The $5\sigma$ level is claimed to account for this. 
Several criticisms of the $5\sigma$ threshold have been written by Louis Lyons$^{8,9}$, and the earlier-mentioned articles by Robert D Cousins$^{1}$ and Tommaso Dorigo$^{2}$ also provide critique.

Other fields

It is interesting to note that many other scientific fields do not have similar thresholds or do not, somehow, deal with the issue. This makes some sense in the case of experiments with humans, where it is very costly (or impossible) to extend an experiment that gave a .05 or .01 significance. A consequence of not taking these effects into account is that over half of published results may be wrong, or at least not reproducible (this has been argued for the case of psychology by Monya Baker$^{10}$, and I believe many others have made similar arguments; I personally think the situation may be even worse in nutritional science). And now, people from fields other than physics are thinking about how they should deal with this issue as well (the case of medicine/pharmacology$^{11}$).

1. Cousins, R. D. (2017). The Jeffreys–Lindley paradox and discovery criteria in high energy physics. Synthese, 194(2), 395-432. arxiv link
2. Dorigo, T. (2013). Demystifying The Five-Sigma Criterion, from science20.com, 2019-03-07
3. Rosenfeld, A. H. (1968). Are there any far-out mesons or baryons? web-source: escholarship
4. Burbidge, G., Roberts, M., Schneider, S., Sharp, N., & Tifft, W. (1990, November). Panel discussion: Redshift related problems. In NASA Conference Publication (Vol. 3098, p. 462). link to photocopy on harvard.edu
5. Franklin, A. (2013). Shifting standards: Experiments in particle physics in the twentieth century. University of Pittsburgh Press.
6. What does the 5 sigma mean? from physics.org, 2019-03-07
7. Beringer, J., Arguin, J. F., Barnett, R. M., Copic, K., Dahl, O., Groom, D. E., ... & Yao, W. M. (2012). Review of particle physics. Physical Review D-Particles, Fields, Gravitation and Cosmology, 86(1), 010001. (Section 36.2.2, Significance tests, page 394, link aps.org)
8. Lyons, L. (2013). Discovering the Significance of 5 sigma. arXiv preprint arXiv:1310.1284. arxiv link
9. Lyons, L. (2014). Statistical Issues in Searches for New Physics. arXiv preprint. arxiv link
10. Baker, M. (2015). Over half of psychology studies fail reproducibility test. Nature News. from nature.com, 2019-03-07
11. Horton, R. (2015). Offline: what is medicine's 5 sigma?. The Lancet, 385(9976), 1380. from thelancet.com, 2019-03-07
7,298
Origin of "5$\sigma$" threshold for accepting evidence in particle physics?
In most applications of statistics there is that old chestnut that 'all models are wrong, some are useful'. This being the case, we would only expect a model to perform at a given level, since we are describing some incredibly complicated process using a simple model. Physics is very different, so intuition developed from statistical models isn't so appropriate. In physics, in particular particle physics, which deals directly with fundamental physical laws, the model really is supposed to be an exact description of reality. Any departure from what the model predicts must be completely explained by experimental noise, not by a limitation of the model. This means that if the model is good and correct, and the experimental apparatus is understood, the statistical significance should be very high; hence the high bar that is set. The other reason is historical: the particle physics community has been burned in the past by 'discoveries' at lower significance levels being later retracted, so they are generally more cautious now.
7,299
Origin of "5$\sigma$" threshold for accepting evidence in particle physics?
For a reason entirely different from that of physics, there are other fields with much stricter alphas when they engage in hypothesis testing. Genetic epidemiology is among them, especially when a "GWAS" (genome-wide association study) is used to look at various genetic markers for disease. Because a GWAS is a massive exercise in multiple hypothesis testing, the state-of-the-art analysis techniques are all built around much stricter alphas than 0.05 (the conventional genome-wide significance threshold is $5 \times 10^{-8}$). Other such "candidate screening" study techniques that follow in the wake of the genomics studies will likely do the same.
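The multiple-testing arithmetic behind such strict thresholds can be sketched with a simple Bonferroni correction; the test count below is illustrative only (GWAS analyses commonly assume on the order of a million independent common variants):

```python
# Bonferroni correction: divide the desired family-wise error rate by the
# number of tests to get the per-test significance threshold.
family_wise_alpha = 0.05
num_tests = 1_000_000  # illustrative order of magnitude for a GWAS

per_test_alpha = family_wise_alpha / num_tests
print(f"{per_test_alpha:g}")  # prints 5e-08
```

This recovers the familiar genome-wide significance level of $5 \times 10^{-8}$.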
7,300
Origin of "5$\sigma$" threshold for accepting evidence in particle physics?
The level is set so high in order to avoid premature announcements of discoveries that later turn out to be spurious. For more discussion, see https://physics.stackexchange.com/questions/8752/standard-deviation-in-particle-physics?rq=1 https://physics.stackexchange.com/questions/31126/how-many-sigma-did-the-discovery-of-the-w-boson-have